00:00:00.001 Started by upstream project "autotest-per-patch" build number 132806
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.115 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.116 The recommended git tool is: git
00:00:00.116 using credential 00000000-0000-0000-0000-000000000002
00:00:00.119 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.161 Fetching changes from the remote Git repository
00:00:00.165 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.205 Using shallow fetch with depth 1
00:00:00.205 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.205 > git --version # timeout=10
00:00:00.239 > git --version # 'git version 2.39.2'
00:00:00.239 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.259 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.259 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.744 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.757 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.770 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.770 > git config core.sparsecheckout # timeout=10
00:00:05.782 > git read-tree -mu HEAD # timeout=10
00:00:05.798 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.820 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.821 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.919 [Pipeline] Start of Pipeline
00:00:05.932 [Pipeline] library
00:00:05.933 Loading library shm_lib@master
00:00:05.934 Library shm_lib@master is cached. Copying from home.
00:00:05.951 [Pipeline] node
00:00:05.965 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.966 [Pipeline] {
00:00:05.977 [Pipeline] catchError
00:00:05.979 [Pipeline] {
00:00:05.992 [Pipeline] wrap
00:00:06.000 [Pipeline] {
00:00:06.007 [Pipeline] stage
00:00:06.009 [Pipeline] { (Prologue)
00:00:06.269 [Pipeline] sh
00:00:06.553 + logger -p user.info -t JENKINS-CI
00:00:06.568 [Pipeline] echo
00:00:06.569 Node: WFP4
00:00:06.575 [Pipeline] sh
00:00:06.872 [Pipeline] setCustomBuildProperty
00:00:06.882 [Pipeline] echo
00:00:06.883 Cleanup processes
00:00:06.888 [Pipeline] sh
00:00:07.171 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.171 1625438 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.184 [Pipeline] sh
00:00:07.471 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.471 ++ grep -v 'sudo pgrep'
00:00:07.472 ++ awk '{print $1}'
00:00:07.472 + sudo kill -9
00:00:07.472 + true
00:00:07.484 [Pipeline] cleanWs
00:00:07.493 [WS-CLEANUP] Deleting project workspace...
00:00:07.493 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.500 [WS-CLEANUP] done
00:00:07.504 [Pipeline] setCustomBuildProperty
00:00:07.515 [Pipeline] sh
00:00:07.794 + sudo git config --global --replace-all safe.directory '*'
00:00:07.865 [Pipeline] httpRequest
00:00:08.285 [Pipeline] echo
00:00:08.287 Sorcerer 10.211.164.112 is alive
00:00:08.295 [Pipeline] retry
00:00:08.297 [Pipeline] {
00:00:08.311 [Pipeline] httpRequest
00:00:08.315 HttpMethod: GET
00:00:08.315 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.316 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.331 Response Code: HTTP/1.1 200 OK
00:00:08.331 Success: Status code 200 is in the accepted range: 200,404
00:00:08.332 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.408 [Pipeline] }
00:00:14.427 [Pipeline] // retry
00:00:14.436 [Pipeline] sh
00:00:14.721 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.736 [Pipeline] httpRequest
00:00:15.110 [Pipeline] echo
00:00:15.112 Sorcerer 10.211.164.112 is alive
00:00:15.122 [Pipeline] retry
00:00:15.124 [Pipeline] {
00:00:15.139 [Pipeline] httpRequest
00:00:15.144 HttpMethod: GET
00:00:15.144 URL: http://10.211.164.112/packages/spdk_608f2e392e65db2e5005d2a0f701ca071e8fd5d2.tar.gz
00:00:15.145 Sending request to url: http://10.211.164.112/packages/spdk_608f2e392e65db2e5005d2a0f701ca071e8fd5d2.tar.gz
00:00:15.166 Response Code: HTTP/1.1 200 OK
00:00:15.166 Success: Status code 200 is in the accepted range: 200,404
00:00:15.169 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_608f2e392e65db2e5005d2a0f701ca071e8fd5d2.tar.gz
00:01:29.949 [Pipeline] }
00:01:29.967 [Pipeline] // retry
00:01:29.976 [Pipeline] sh
00:01:30.264 + tar --no-same-owner -xf spdk_608f2e392e65db2e5005d2a0f701ca071e8fd5d2.tar.gz
00:01:32.819 [Pipeline] sh
00:01:33.138 + git -C spdk log --oneline -n5
00:01:33.138 608f2e392 test/check_so_deps: use VERSION to look for prior tags
00:01:33.138 6584139bf build: use VERSION file for storing version
00:01:33.138 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:01:33.138 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:01:33.138 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:01:33.150 [Pipeline] }
00:01:33.164 [Pipeline] // stage
00:01:33.173 [Pipeline] stage
00:01:33.175 [Pipeline] { (Prepare)
00:01:33.194 [Pipeline] writeFile
00:01:33.210 [Pipeline] sh
00:01:33.497 + logger -p user.info -t JENKINS-CI
00:01:33.510 [Pipeline] sh
00:01:33.795 + logger -p user.info -t JENKINS-CI
00:01:33.809 [Pipeline] sh
00:01:34.096 + cat autorun-spdk.conf
00:01:34.096 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.096 SPDK_TEST_NVMF=1
00:01:34.096 SPDK_TEST_NVME_CLI=1
00:01:34.096 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:34.096 SPDK_TEST_NVMF_NICS=e810
00:01:34.096 SPDK_TEST_VFIOUSER=1
00:01:34.096 SPDK_RUN_UBSAN=1
00:01:34.096 NET_TYPE=phy
00:01:34.104 RUN_NIGHTLY=0
00:01:34.109 [Pipeline] readFile
00:01:34.136 [Pipeline] withEnv
00:01:34.138 [Pipeline] {
00:01:34.152 [Pipeline] sh
00:01:34.438 + set -ex
00:01:34.438 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:34.438 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:34.438 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.438 ++ SPDK_TEST_NVMF=1
00:01:34.438 ++ SPDK_TEST_NVME_CLI=1
00:01:34.438 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:34.438 ++ SPDK_TEST_NVMF_NICS=e810
00:01:34.438 ++ SPDK_TEST_VFIOUSER=1
00:01:34.438 ++ SPDK_RUN_UBSAN=1
00:01:34.438 ++ NET_TYPE=phy
00:01:34.438 ++ RUN_NIGHTLY=0
00:01:34.438 + case $SPDK_TEST_NVMF_NICS in
00:01:34.438 + DRIVERS=ice
00:01:34.438 + [[ tcp == \r\d\m\a ]]
00:01:34.438 + [[ -n ice ]]
00:01:34.438 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:34.438 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:34.438 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:34.438 rmmod: ERROR: Module i40iw is not currently loaded
00:01:34.438 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:34.438 + true
00:01:34.438 + for D in $DRIVERS
00:01:34.438 + sudo modprobe ice
00:01:34.438 + exit 0
00:01:34.448 [Pipeline] }
00:01:34.463 [Pipeline] // withEnv
00:01:34.468 [Pipeline] }
00:01:34.482 [Pipeline] // stage
00:01:34.492 [Pipeline] catchError
00:01:34.493 [Pipeline] {
00:01:34.506 [Pipeline] timeout
00:01:34.506 Timeout set to expire in 1 hr 0 min
00:01:34.508 [Pipeline] {
00:01:34.521 [Pipeline] stage
00:01:34.523 [Pipeline] { (Tests)
00:01:34.537 [Pipeline] sh
00:01:34.826 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:34.826 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:34.826 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:34.826 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:34.826 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:34.826 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:34.826 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:34.826 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:34.826 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:34.826 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:34.826 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:34.826 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:34.826 + source /etc/os-release
00:01:34.826 ++ NAME='Fedora Linux'
00:01:34.826 ++ VERSION='39 (Cloud Edition)'
00:01:34.826 ++ ID=fedora
00:01:34.826 ++ VERSION_ID=39
00:01:34.826 ++ VERSION_CODENAME=
00:01:34.826 ++ PLATFORM_ID=platform:f39
00:01:34.826 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:34.826 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:34.826 ++ LOGO=fedora-logo-icon
00:01:34.826 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:34.826 ++ HOME_URL=https://fedoraproject.org/
00:01:34.826 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:34.826 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:34.826 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:34.826 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:34.826 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:34.826 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:34.826 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:34.826 ++ SUPPORT_END=2024-11-12
00:01:34.826 ++ VARIANT='Cloud Edition'
00:01:34.826 ++ VARIANT_ID=cloud
00:01:34.826 + uname -a
00:01:34.826 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:34.826 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:37.366 Hugepages
00:01:37.366 node hugesize free / total
00:01:37.366 node0 1048576kB 0 / 0
00:01:37.366 node0 2048kB 0 / 0
00:01:37.366 node1 1048576kB 0 / 0
00:01:37.366 node1 2048kB 0 / 0
00:01:37.366
00:01:37.366 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:37.366 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:37.366 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:37.366 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:37.366 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:37.366 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:37.366 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:37.366 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:37.366 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:37.366 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:37.366 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:37.366 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:37.366 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:37.366 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:37.366 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:37.366 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:37.366 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:37.366 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:37.366 + rm -f /tmp/spdk-ld-path
00:01:37.366 + source autorun-spdk.conf
00:01:37.366 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:37.366 ++ SPDK_TEST_NVMF=1
00:01:37.366 ++ SPDK_TEST_NVME_CLI=1
00:01:37.366 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:37.366 ++ SPDK_TEST_NVMF_NICS=e810
00:01:37.366 ++ SPDK_TEST_VFIOUSER=1
00:01:37.366 ++ SPDK_RUN_UBSAN=1
00:01:37.366 ++ NET_TYPE=phy
00:01:37.366 ++ RUN_NIGHTLY=0
00:01:37.366 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:37.366 + [[ -n '' ]]
00:01:37.366 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:37.366 + for M in /var/spdk/build-*-manifest.txt
00:01:37.366 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:37.366 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:37.366 + for M in /var/spdk/build-*-manifest.txt
00:01:37.366 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:37.366 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:37.366 + for M in /var/spdk/build-*-manifest.txt
00:01:37.366 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:37.366 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:37.366 ++ uname
00:01:37.366 + [[ Linux == \L\i\n\u\x ]]
00:01:37.366 + sudo dmesg -T
00:01:37.626 + sudo dmesg --clear
00:01:37.626 + dmesg_pid=1626489
00:01:37.626 + [[ Fedora Linux == FreeBSD ]]
00:01:37.626 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:37.626 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:37.626 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:37.626 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:37.626 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:37.626 + [[ -x /usr/src/fio-static/fio ]]
00:01:37.626 + sudo dmesg -Tw
00:01:37.626 + export FIO_BIN=/usr/src/fio-static/fio
00:01:37.626 + FIO_BIN=/usr/src/fio-static/fio
00:01:37.626 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:37.626 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:37.626 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:37.626 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:37.626 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:37.626 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:37.626 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:37.626 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:37.626 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:37.626 17:12:04 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:37.626 17:12:04 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:37.626 17:12:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:37.626 17:12:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:37.626 17:12:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:37.626 17:12:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:37.626 17:12:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:37.626 17:12:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:37.626 17:12:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:37.626 17:12:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:37.626 17:12:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:37.626 17:12:04 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:37.626 17:12:04 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:37.626 17:12:04 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:37.626 17:12:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:37.626 17:12:04 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:37.626 17:12:04 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:37.626 17:12:04 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:37.626 17:12:04 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:37.626 17:12:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.626 17:12:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.626 17:12:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.626 17:12:04 -- paths/export.sh@5 -- $ export PATH
00:01:37.626 17:12:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.626 17:12:04 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:37.626 Traceback (most recent call last):
00:01:37.626 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py", line 24, in
00:01:37.626 import spdk.rpc as rpc # noqa
00:01:37.626 ^^^^^^^^^^^^^^^^^^^^^^
00:01:37.626 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/__init__.py", line 5, in
00:01:37.626 from .version import __version__
00:01:37.626 ModuleNotFoundError: No module named 'spdk.version'
00:01:37.626 17:12:04 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:37.626 17:12:04 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733760724.XXXXXX
00:01:37.626 17:12:04 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733760724.Aguxub
00:01:37.626 17:12:04 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:37.626 17:12:04 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:37.626 17:12:04 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:37.626 17:12:04 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:37.626 17:12:04 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:37.626 17:12:04 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:37.626 17:12:04 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:37.626 17:12:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:37.626 17:12:04 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:37.626 17:12:04 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:37.626 17:12:04 -- pm/common@17 -- $ local monitor
00:01:37.627 17:12:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.627 17:12:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.627 17:12:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.627 17:12:04 -- pm/common@21 -- $ date +%s
00:01:37.627 17:12:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.627 17:12:04 -- pm/common@21 -- $ date +%s
00:01:37.627 17:12:04 -- pm/common@25 -- $ sleep 1
00:01:37.627 17:12:04 -- pm/common@21 -- $ date +%s
00:01:37.627 17:12:04 -- pm/common@21 -- $ date +%s
00:01:37.627 17:12:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733760724
00:01:37.627 17:12:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733760724
00:01:37.627 17:12:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733760724
00:01:37.627 17:12:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733760724
00:01:37.627 Traceback (most recent call last):
00:01:37.627 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py", line 24, in
00:01:37.627 import spdk.rpc as rpc # noqa
00:01:37.627 ^^^^^^^^^^^^^^^^^^^^^^
00:01:37.627 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/__init__.py", line 5, in
00:01:37.627 from .version import __version__
00:01:37.627 ModuleNotFoundError: No module named 'spdk.version'
00:01:37.886 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733760724_collect-cpu-load.pm.log
00:01:37.886 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733760724_collect-vmstat.pm.log
00:01:37.887 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733760724_collect-cpu-temp.pm.log
00:01:37.887 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733760724_collect-bmc-pm.bmc.pm.log
00:01:38.825 17:12:05 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:38.825 17:12:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:38.825 17:12:05 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:38.825 17:12:05 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:38.825 17:12:05 -- spdk/autobuild.sh@16 -- $ date -u
00:01:38.825 Mon Dec 9 04:12:05 PM UTC 2024
00:01:38.825 17:12:05 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:38.825 v25.01-pre-305-g608f2e392
00:01:38.825 17:12:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:38.825 17:12:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:38.825 17:12:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:38.825 17:12:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:38.825 17:12:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:38.825 17:12:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:38.825 ************************************
00:01:38.825 START TEST ubsan
00:01:38.825 ************************************
00:01:38.825 17:12:05 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:38.825 using ubsan
00:01:38.825
00:01:38.825 real 0m0.000s
00:01:38.825 user 0m0.000s
00:01:38.825 sys 0m0.000s
00:01:38.825 17:12:05 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:38.825 17:12:05 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:38.825 ************************************
00:01:38.825 END TEST ubsan
00:01:38.825 ************************************
00:01:38.825 17:12:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:38.825 17:12:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:38.825 17:12:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:38.825 17:12:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:38.825 17:12:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:38.825 17:12:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:38.825 17:12:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:38.825 17:12:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:38.825 17:12:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:39.085 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:39.085 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:39.344 Using 'verbs' RDMA provider
00:01:52.135 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:04.354 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:04.354 Creating mk/config.mk...done.
00:02:04.354 Creating mk/cc.flags.mk...done.
00:02:04.354 Type 'make' to build.
00:02:04.354 17:12:30 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:04.354 17:12:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:04.354 17:12:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:04.354 17:12:30 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.354 ************************************
00:02:04.354 START TEST make
00:02:04.354 ************************************
00:02:04.354 17:12:30 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:06.274 The Meson build system
00:02:06.274 Version: 1.5.0
00:02:06.274 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:06.274 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:06.274 Build type: native build
00:02:06.274 Project name: libvfio-user
00:02:06.274 Project version: 0.0.1
00:02:06.274 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:06.274 C linker for the host machine: cc ld.bfd 2.40-14
00:02:06.274 Host machine cpu family: x86_64
00:02:06.274 Host machine cpu: x86_64
00:02:06.274 Run-time dependency threads found: YES
00:02:06.274 Library dl found: YES
00:02:06.274 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:06.274 Run-time dependency json-c found: YES 0.17
00:02:06.274 Run-time dependency cmocka found: YES 1.1.7
00:02:06.274 Program pytest-3 found: NO
00:02:06.274 Program flake8 found: NO
00:02:06.274 Program misspell-fixer found: NO
00:02:06.274 Program restructuredtext-lint found: NO
00:02:06.274 Program valgrind found: YES (/usr/bin/valgrind)
00:02:06.274 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:06.274 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:06.275 Compiler for C supports arguments -Wwrite-strings: YES
00:02:06.275 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:06.275 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:06.275 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:06.275 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:06.275 Build targets in project: 8
00:02:06.275 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:06.275 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:06.275
00:02:06.275 libvfio-user 0.0.1
00:02:06.275
00:02:06.275 User defined options
00:02:06.275 buildtype : debug
00:02:06.275 default_library: shared
00:02:06.275 libdir : /usr/local/lib
00:02:06.275
00:02:06.275 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:06.841 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:06.841 [1/37] Compiling C object samples/null.p/null.c.o
00:02:06.841 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:06.841 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:06.841 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:06.841 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:06.841 [6/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:06.841 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:06.841 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:06.841 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:06.841 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:06.841 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:07.100 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:07.100 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:07.100 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:07.100 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:07.100 [16/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:07.100 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:07.100 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:07.100 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:07.100 [20/37] Compiling C object samples/server.p/server.c.o
00:02:07.100 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:07.100 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:07.100 [23/37] Compiling C object samples/client.p/client.c.o
00:02:07.100 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:07.100 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:07.100 [26/37] Linking target samples/client
00:02:07.100 [27/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:07.100 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:07.100 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:07.100 [30/37] Linking target test/unit_tests
00:02:07.100 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:02:07.359 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:07.359 [33/37] Linking target samples/server
00:02:07.359 [34/37] Linking target samples/null
00:02:07.359 [35/37] Linking target samples/gpio-pci-idio-16
00:02:07.359 [36/37] Linking target samples/shadow_ioeventfd_server
00:02:07.359 [37/37] Linking target samples/lspci
00:02:07.359 INFO: autodetecting backend as ninja
00:02:07.359 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:07.359 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:07.617 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:07.617 ninja: no work to do.
00:02:12.891 The Meson build system
00:02:12.891 Version: 1.5.0
00:02:12.891 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:12.891 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:12.891 Build type: native build
00:02:12.891 Program cat found: YES (/usr/bin/cat)
00:02:12.891 Project name: DPDK
00:02:12.891 Project version: 24.03.0
00:02:12.891 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:12.891 C linker for the host machine: cc ld.bfd 2.40-14
00:02:12.891 Host machine cpu family: x86_64
00:02:12.891 Host machine cpu: x86_64
00:02:12.891 Message: ## Building in Developer Mode ##
00:02:12.891 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:12.891 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:12.891 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:12.891 Program python3 found: YES (/usr/bin/python3)
00:02:12.891 Program cat found: YES (/usr/bin/cat)
00:02:12.891 Compiler for C supports arguments -march=native: YES
00:02:12.891 Checking for size of "void *" : 8
00:02:12.891 Checking for size of "void *" : 8 (cached)
00:02:12.891 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:12.891 Library m found: YES
00:02:12.891 Library numa found: YES
00:02:12.891 Has header "numaif.h" : YES
00:02:12.891 Library fdt found: NO
00:02:12.891 Library execinfo found: NO
00:02:12.891 Has header "execinfo.h" : YES
00:02:12.891 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:12.891 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:12.891 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:12.891 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:12.891 Run-time dependency openssl found: YES 3.1.1
00:02:12.891 Run-time dependency libpcap found: YES 1.10.4
00:02:12.891 Has header "pcap.h" with dependency libpcap: YES
00:02:12.891 Compiler for C supports arguments -Wcast-qual: YES
00:02:12.891 Compiler for C supports arguments -Wdeprecated: YES
00:02:12.891 Compiler for C supports arguments -Wformat: YES
00:02:12.891 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:12.891 Compiler for C supports arguments -Wformat-security: NO
00:02:12.891 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:12.891 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:12.891 Compiler for C supports arguments -Wnested-externs: YES
00:02:12.891 Compiler for C supports arguments -Wold-style-definition: YES
00:02:12.891 Compiler for C supports arguments -Wpointer-arith: YES
00:02:12.891 Compiler for C supports arguments -Wsign-compare: YES
00:02:12.891 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:12.891 Compiler for C supports arguments -Wundef: YES
00:02:12.891 Compiler for C supports arguments -Wwrite-strings: YES
00:02:12.891 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:12.891 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:12.891 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:12.891 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:12.891 Program objdump found: YES (/usr/bin/objdump)
00:02:12.891 Compiler for C supports arguments -mavx512f: YES
00:02:12.891 Checking if "AVX512 checking" compiles: YES
00:02:12.891 Fetching value of define "__SSE4_2__" : 1
00:02:12.891 Fetching value of define "__AES__" : 1
00:02:12.891 Fetching value of define "__AVX__" : 1
00:02:12.891 Fetching value of define "__AVX2__" : 1
00:02:12.891 Fetching value of define "__AVX512BW__" : 1
00:02:12.891 Fetching value of define "__AVX512CD__" : 1
00:02:12.891 Fetching value of define "__AVX512DQ__" : 1
00:02:12.891 Fetching value of define "__AVX512F__" : 1
00:02:12.891 Fetching value of define "__AVX512VL__" : 1
00:02:12.891 Fetching value of define "__PCLMUL__" : 1
00:02:12.891 Fetching value of define "__RDRND__" : 1
00:02:12.891 Fetching value of define "__RDSEED__" : 1
00:02:12.891 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:12.891 Fetching value of define "__znver1__" : (undefined)
00:02:12.891 Fetching value of define "__znver2__" : (undefined)
00:02:12.891 Fetching value of define "__znver3__" : (undefined)
00:02:12.891 Fetching value of define "__znver4__" : (undefined)
00:02:12.891 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:12.891 Message: lib/log: Defining dependency "log"
00:02:12.891 Message: lib/kvargs: Defining dependency "kvargs"
00:02:12.891 Message: lib/telemetry: Defining dependency "telemetry"
00:02:12.891 Checking for function "getentropy" : NO
00:02:12.891 Message: lib/eal: Defining dependency "eal"
00:02:12.891 Message: lib/ring: Defining dependency "ring"
00:02:12.891 Message: lib/rcu: Defining dependency "rcu"
00:02:12.891 Message: lib/mempool: Defining dependency "mempool"
00:02:12.891 Message: lib/mbuf: Defining dependency "mbuf"
00:02:12.891 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:12.891 Fetching value of define
"__AVX512F__" : 1 (cached) 00:02:12.891 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:12.891 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:12.891 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:12.891 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:12.891 Compiler for C supports arguments -mpclmul: YES 00:02:12.891 Compiler for C supports arguments -maes: YES 00:02:12.891 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:12.891 Compiler for C supports arguments -mavx512bw: YES 00:02:12.891 Compiler for C supports arguments -mavx512dq: YES 00:02:12.891 Compiler for C supports arguments -mavx512vl: YES 00:02:12.891 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:12.891 Compiler for C supports arguments -mavx2: YES 00:02:12.891 Compiler for C supports arguments -mavx: YES 00:02:12.891 Message: lib/net: Defining dependency "net" 00:02:12.891 Message: lib/meter: Defining dependency "meter" 00:02:12.891 Message: lib/ethdev: Defining dependency "ethdev" 00:02:12.891 Message: lib/pci: Defining dependency "pci" 00:02:12.891 Message: lib/cmdline: Defining dependency "cmdline" 00:02:12.892 Message: lib/hash: Defining dependency "hash" 00:02:12.892 Message: lib/timer: Defining dependency "timer" 00:02:12.892 Message: lib/compressdev: Defining dependency "compressdev" 00:02:12.892 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:12.892 Message: lib/dmadev: Defining dependency "dmadev" 00:02:12.892 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:12.892 Message: lib/power: Defining dependency "power" 00:02:12.892 Message: lib/reorder: Defining dependency "reorder" 00:02:12.892 Message: lib/security: Defining dependency "security" 00:02:12.892 Has header "linux/userfaultfd.h" : YES 00:02:12.892 Has header "linux/vduse.h" : YES 00:02:12.892 Message: lib/vhost: Defining dependency "vhost" 00:02:12.892 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 
00:02:12.892 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:12.892 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:12.892 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:12.892 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:12.892 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:12.892 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:12.892 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:12.892 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:12.892 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:12.892 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:12.892 Configuring doxy-api-html.conf using configuration 00:02:12.892 Configuring doxy-api-man.conf using configuration 00:02:12.892 Program mandb found: YES (/usr/bin/mandb) 00:02:12.892 Program sphinx-build found: NO 00:02:12.892 Configuring rte_build_config.h using configuration 00:02:12.892 Message: 00:02:12.892 ================= 00:02:12.892 Applications Enabled 00:02:12.892 ================= 00:02:12.892 00:02:12.892 apps: 00:02:12.892 00:02:12.892 00:02:12.892 Message: 00:02:12.892 ================= 00:02:12.892 Libraries Enabled 00:02:12.892 ================= 00:02:12.892 00:02:12.892 libs: 00:02:12.892 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:12.892 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:12.892 cryptodev, dmadev, power, reorder, security, vhost, 00:02:12.892 00:02:12.892 Message: 00:02:12.892 =============== 00:02:12.892 Drivers Enabled 00:02:12.892 =============== 00:02:12.892 00:02:12.892 common: 00:02:12.892 00:02:12.892 bus: 00:02:12.892 pci, vdev, 00:02:12.892 mempool: 00:02:12.892 ring, 00:02:12.892 dma: 00:02:12.892 00:02:12.892 net: 00:02:12.892 00:02:12.892 crypto: 00:02:12.892 00:02:12.892 compress: 
00:02:12.892 00:02:12.892 vdpa: 00:02:12.892 00:02:12.892 00:02:12.892 Message: 00:02:12.892 ================= 00:02:12.892 Content Skipped 00:02:12.892 ================= 00:02:12.892 00:02:12.892 apps: 00:02:12.892 dumpcap: explicitly disabled via build config 00:02:12.892 graph: explicitly disabled via build config 00:02:12.892 pdump: explicitly disabled via build config 00:02:12.892 proc-info: explicitly disabled via build config 00:02:12.892 test-acl: explicitly disabled via build config 00:02:12.892 test-bbdev: explicitly disabled via build config 00:02:12.892 test-cmdline: explicitly disabled via build config 00:02:12.892 test-compress-perf: explicitly disabled via build config 00:02:12.892 test-crypto-perf: explicitly disabled via build config 00:02:12.892 test-dma-perf: explicitly disabled via build config 00:02:12.892 test-eventdev: explicitly disabled via build config 00:02:12.892 test-fib: explicitly disabled via build config 00:02:12.892 test-flow-perf: explicitly disabled via build config 00:02:12.892 test-gpudev: explicitly disabled via build config 00:02:12.892 test-mldev: explicitly disabled via build config 00:02:12.892 test-pipeline: explicitly disabled via build config 00:02:12.892 test-pmd: explicitly disabled via build config 00:02:12.892 test-regex: explicitly disabled via build config 00:02:12.892 test-sad: explicitly disabled via build config 00:02:12.892 test-security-perf: explicitly disabled via build config 00:02:12.892 00:02:12.892 libs: 00:02:12.892 argparse: explicitly disabled via build config 00:02:12.892 metrics: explicitly disabled via build config 00:02:12.892 acl: explicitly disabled via build config 00:02:12.892 bbdev: explicitly disabled via build config 00:02:12.892 bitratestats: explicitly disabled via build config 00:02:12.892 bpf: explicitly disabled via build config 00:02:12.892 cfgfile: explicitly disabled via build config 00:02:12.892 distributor: explicitly disabled via build config 00:02:12.892 efd: explicitly 
disabled via build config 00:02:12.892 eventdev: explicitly disabled via build config 00:02:12.892 dispatcher: explicitly disabled via build config 00:02:12.892 gpudev: explicitly disabled via build config 00:02:12.892 gro: explicitly disabled via build config 00:02:12.892 gso: explicitly disabled via build config 00:02:12.892 ip_frag: explicitly disabled via build config 00:02:12.892 jobstats: explicitly disabled via build config 00:02:12.892 latencystats: explicitly disabled via build config 00:02:12.892 lpm: explicitly disabled via build config 00:02:12.892 member: explicitly disabled via build config 00:02:12.892 pcapng: explicitly disabled via build config 00:02:12.892 rawdev: explicitly disabled via build config 00:02:12.892 regexdev: explicitly disabled via build config 00:02:12.892 mldev: explicitly disabled via build config 00:02:12.892 rib: explicitly disabled via build config 00:02:12.892 sched: explicitly disabled via build config 00:02:12.892 stack: explicitly disabled via build config 00:02:12.892 ipsec: explicitly disabled via build config 00:02:12.892 pdcp: explicitly disabled via build config 00:02:12.892 fib: explicitly disabled via build config 00:02:12.892 port: explicitly disabled via build config 00:02:12.892 pdump: explicitly disabled via build config 00:02:12.892 table: explicitly disabled via build config 00:02:12.892 pipeline: explicitly disabled via build config 00:02:12.892 graph: explicitly disabled via build config 00:02:12.892 node: explicitly disabled via build config 00:02:12.892 00:02:12.892 drivers: 00:02:12.892 common/cpt: not in enabled drivers build config 00:02:12.892 common/dpaax: not in enabled drivers build config 00:02:12.892 common/iavf: not in enabled drivers build config 00:02:12.892 common/idpf: not in enabled drivers build config 00:02:12.892 common/ionic: not in enabled drivers build config 00:02:12.892 common/mvep: not in enabled drivers build config 00:02:12.892 common/octeontx: not in enabled drivers build config 
00:02:12.892 bus/auxiliary: not in enabled drivers build config 00:02:12.892 bus/cdx: not in enabled drivers build config 00:02:12.892 bus/dpaa: not in enabled drivers build config 00:02:12.892 bus/fslmc: not in enabled drivers build config 00:02:12.892 bus/ifpga: not in enabled drivers build config 00:02:12.892 bus/platform: not in enabled drivers build config 00:02:12.892 bus/uacce: not in enabled drivers build config 00:02:12.892 bus/vmbus: not in enabled drivers build config 00:02:12.892 common/cnxk: not in enabled drivers build config 00:02:12.892 common/mlx5: not in enabled drivers build config 00:02:12.892 common/nfp: not in enabled drivers build config 00:02:12.892 common/nitrox: not in enabled drivers build config 00:02:12.892 common/qat: not in enabled drivers build config 00:02:12.892 common/sfc_efx: not in enabled drivers build config 00:02:12.892 mempool/bucket: not in enabled drivers build config 00:02:12.892 mempool/cnxk: not in enabled drivers build config 00:02:12.892 mempool/dpaa: not in enabled drivers build config 00:02:12.892 mempool/dpaa2: not in enabled drivers build config 00:02:12.892 mempool/octeontx: not in enabled drivers build config 00:02:12.892 mempool/stack: not in enabled drivers build config 00:02:12.892 dma/cnxk: not in enabled drivers build config 00:02:12.892 dma/dpaa: not in enabled drivers build config 00:02:12.892 dma/dpaa2: not in enabled drivers build config 00:02:12.892 dma/hisilicon: not in enabled drivers build config 00:02:12.892 dma/idxd: not in enabled drivers build config 00:02:12.892 dma/ioat: not in enabled drivers build config 00:02:12.892 dma/skeleton: not in enabled drivers build config 00:02:12.892 net/af_packet: not in enabled drivers build config 00:02:12.892 net/af_xdp: not in enabled drivers build config 00:02:12.892 net/ark: not in enabled drivers build config 00:02:12.892 net/atlantic: not in enabled drivers build config 00:02:12.892 net/avp: not in enabled drivers build config 00:02:12.892 net/axgbe: not 
in enabled drivers build config 00:02:12.892 net/bnx2x: not in enabled drivers build config 00:02:12.892 net/bnxt: not in enabled drivers build config 00:02:12.892 net/bonding: not in enabled drivers build config 00:02:12.892 net/cnxk: not in enabled drivers build config 00:02:12.892 net/cpfl: not in enabled drivers build config 00:02:12.892 net/cxgbe: not in enabled drivers build config 00:02:12.892 net/dpaa: not in enabled drivers build config 00:02:12.892 net/dpaa2: not in enabled drivers build config 00:02:12.892 net/e1000: not in enabled drivers build config 00:02:12.892 net/ena: not in enabled drivers build config 00:02:12.892 net/enetc: not in enabled drivers build config 00:02:12.892 net/enetfec: not in enabled drivers build config 00:02:12.892 net/enic: not in enabled drivers build config 00:02:12.892 net/failsafe: not in enabled drivers build config 00:02:12.892 net/fm10k: not in enabled drivers build config 00:02:12.892 net/gve: not in enabled drivers build config 00:02:12.892 net/hinic: not in enabled drivers build config 00:02:12.892 net/hns3: not in enabled drivers build config 00:02:12.892 net/i40e: not in enabled drivers build config 00:02:12.892 net/iavf: not in enabled drivers build config 00:02:12.892 net/ice: not in enabled drivers build config 00:02:12.892 net/idpf: not in enabled drivers build config 00:02:12.892 net/igc: not in enabled drivers build config 00:02:12.892 net/ionic: not in enabled drivers build config 00:02:12.892 net/ipn3ke: not in enabled drivers build config 00:02:12.892 net/ixgbe: not in enabled drivers build config 00:02:12.892 net/mana: not in enabled drivers build config 00:02:12.892 net/memif: not in enabled drivers build config 00:02:12.892 net/mlx4: not in enabled drivers build config 00:02:12.892 net/mlx5: not in enabled drivers build config 00:02:12.892 net/mvneta: not in enabled drivers build config 00:02:12.892 net/mvpp2: not in enabled drivers build config 00:02:12.892 net/netvsc: not in enabled drivers build 
config 00:02:12.892 net/nfb: not in enabled drivers build config 00:02:12.892 net/nfp: not in enabled drivers build config 00:02:12.892 net/ngbe: not in enabled drivers build config 00:02:12.893 net/null: not in enabled drivers build config 00:02:12.893 net/octeontx: not in enabled drivers build config 00:02:12.893 net/octeon_ep: not in enabled drivers build config 00:02:12.893 net/pcap: not in enabled drivers build config 00:02:12.893 net/pfe: not in enabled drivers build config 00:02:12.893 net/qede: not in enabled drivers build config 00:02:12.893 net/ring: not in enabled drivers build config 00:02:12.893 net/sfc: not in enabled drivers build config 00:02:12.893 net/softnic: not in enabled drivers build config 00:02:12.893 net/tap: not in enabled drivers build config 00:02:12.893 net/thunderx: not in enabled drivers build config 00:02:12.893 net/txgbe: not in enabled drivers build config 00:02:12.893 net/vdev_netvsc: not in enabled drivers build config 00:02:12.893 net/vhost: not in enabled drivers build config 00:02:12.893 net/virtio: not in enabled drivers build config 00:02:12.893 net/vmxnet3: not in enabled drivers build config 00:02:12.893 raw/*: missing internal dependency, "rawdev" 00:02:12.893 crypto/armv8: not in enabled drivers build config 00:02:12.893 crypto/bcmfs: not in enabled drivers build config 00:02:12.893 crypto/caam_jr: not in enabled drivers build config 00:02:12.893 crypto/ccp: not in enabled drivers build config 00:02:12.893 crypto/cnxk: not in enabled drivers build config 00:02:12.893 crypto/dpaa_sec: not in enabled drivers build config 00:02:12.893 crypto/dpaa2_sec: not in enabled drivers build config 00:02:12.893 crypto/ipsec_mb: not in enabled drivers build config 00:02:12.893 crypto/mlx5: not in enabled drivers build config 00:02:12.893 crypto/mvsam: not in enabled drivers build config 00:02:12.893 crypto/nitrox: not in enabled drivers build config 00:02:12.893 crypto/null: not in enabled drivers build config 00:02:12.893 
crypto/octeontx: not in enabled drivers build config 00:02:12.893 crypto/openssl: not in enabled drivers build config 00:02:12.893 crypto/scheduler: not in enabled drivers build config 00:02:12.893 crypto/uadk: not in enabled drivers build config 00:02:12.893 crypto/virtio: not in enabled drivers build config 00:02:12.893 compress/isal: not in enabled drivers build config 00:02:12.893 compress/mlx5: not in enabled drivers build config 00:02:12.893 compress/nitrox: not in enabled drivers build config 00:02:12.893 compress/octeontx: not in enabled drivers build config 00:02:12.893 compress/zlib: not in enabled drivers build config 00:02:12.893 regex/*: missing internal dependency, "regexdev" 00:02:12.893 ml/*: missing internal dependency, "mldev" 00:02:12.893 vdpa/ifc: not in enabled drivers build config 00:02:12.893 vdpa/mlx5: not in enabled drivers build config 00:02:12.893 vdpa/nfp: not in enabled drivers build config 00:02:12.893 vdpa/sfc: not in enabled drivers build config 00:02:12.893 event/*: missing internal dependency, "eventdev" 00:02:12.893 baseband/*: missing internal dependency, "bbdev" 00:02:12.893 gpu/*: missing internal dependency, "gpudev" 00:02:12.893 00:02:12.893 00:02:13.158 Build targets in project: 85 00:02:13.158 00:02:13.158 DPDK 24.03.0 00:02:13.158 00:02:13.158 User defined options 00:02:13.158 buildtype : debug 00:02:13.158 default_library : shared 00:02:13.158 libdir : lib 00:02:13.158 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:13.158 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:13.158 c_link_args : 00:02:13.158 cpu_instruction_set: native 00:02:13.158 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:13.158 disable_libs : 
bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:13.158 enable_docs : false 00:02:13.158 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:13.158 enable_kmods : false 00:02:13.158 max_lcores : 128 00:02:13.158 tests : false 00:02:13.158 00:02:13.158 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:13.426 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:13.426 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:13.689 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:13.689 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:13.689 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:13.689 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:13.689 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:13.689 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:13.689 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:13.689 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:13.689 [10/268] Linking static target lib/librte_kvargs.a 00:02:13.689 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:13.689 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:13.689 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:13.689 [14/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:13.689 [15/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:13.689 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:13.689 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:13.689 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:13.689 [19/268] Linking static target lib/librte_log.a 00:02:13.948 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:13.949 [21/268] Linking static target lib/librte_pci.a 00:02:13.949 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:13.949 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:13.949 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:13.949 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:13.949 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:13.949 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:14.207 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:14.207 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:14.207 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:14.207 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:14.207 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:14.207 [33/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:14.207 [34/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:14.207 [35/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:14.207 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:14.207 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:14.208 [38/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:14.208 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:14.208 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:14.208 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:14.208 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:14.208 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:14.208 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:14.208 [45/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:14.208 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:14.208 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:14.208 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:14.208 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:14.208 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:14.208 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:14.208 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:14.208 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:14.208 [54/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:14.208 [55/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:14.208 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:14.208 [57/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:14.208 [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:14.208 [59/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:14.208 [60/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:14.208 [61/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:14.208 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:14.208 [63/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:14.208 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:14.208 [65/268] Linking static target lib/librte_telemetry.a 00:02:14.208 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:14.208 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:14.208 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:14.208 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:14.208 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:14.208 [71/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:14.208 [72/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:14.208 [73/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:14.208 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:14.208 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:14.208 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:14.208 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:14.208 [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:14.208 [79/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:14.208 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:14.208 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:14.208 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:14.208 [83/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:14.208 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:14.208 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.208 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:14.208 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:14.208 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:14.208 [89/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.208 [90/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.208 [91/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:14.208 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:14.208 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.208 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:14.208 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:14.208 [96/268] Linking static target lib/librte_meter.a 00:02:14.208 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:14.208 [98/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:14.208 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.208 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:14.208 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:14.208 [102/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:14.467 [103/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:14.467 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:14.467 [105/268] Linking static target lib/librte_rcu.a 00:02:14.467 [106/268] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:14.467 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:14.467 [108/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:14.467 [109/268] Linking static target lib/librte_ring.a 00:02:14.467 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:14.467 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:14.467 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.467 [113/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:14.467 [114/268] Linking static target lib/librte_net.a 00:02:14.467 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:14.467 [116/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:14.467 [117/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.467 [118/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:14.467 [119/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:14.467 [120/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.467 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.467 [122/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:14.467 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:14.467 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:14.467 [125/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:14.467 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:14.467 [127/268] Linking static target lib/librte_mempool.a 00:02:14.467 [128/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:14.467 [129/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:14.467 [130/268] Linking static target lib/librte_eal.a 00:02:14.467 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:14.467 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:14.467 [133/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:14.467 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:14.467 [135/268] Linking static target lib/librte_cmdline.a 00:02:14.467 [136/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:14.467 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.467 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.467 [139/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:14.467 [140/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:14.725 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:14.725 [142/268] Linking target lib/librte_log.so.24.1 00:02:14.725 [143/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.725 [144/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.725 [145/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.725 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:14.725 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:14.725 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:14.725 [149/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.725 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 
00:02:14.725 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:14.725 [152/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:14.725 [153/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:14.725 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:14.725 [155/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:14.725 [156/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:14.725 [157/268] Linking static target lib/librte_timer.a 00:02:14.725 [158/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.725 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:14.725 [160/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:14.725 [161/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:14.725 [162/268] Linking static target lib/librte_mbuf.a 00:02:14.725 [163/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:14.725 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:14.725 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:14.725 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:14.725 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:14.725 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:14.725 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:14.725 [170/268] Linking static target lib/librte_compressdev.a 00:02:14.725 [171/268] Linking static target lib/librte_dmadev.a 00:02:14.725 [172/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:14.725 [173/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:14.725 
[174/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:14.725 [175/268] Linking static target lib/librte_power.a 00:02:14.725 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:14.725 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:14.725 [178/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:14.725 [179/268] Linking target lib/librte_kvargs.so.24.1 00:02:14.725 [180/268] Linking static target lib/librte_reorder.a 00:02:14.725 [181/268] Linking target lib/librte_telemetry.so.24.1 00:02:14.725 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:14.725 [183/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:14.984 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:14.984 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:14.984 [186/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:14.984 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:14.984 [188/268] Linking static target lib/librte_security.a 00:02:14.984 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:14.984 [190/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:14.984 [191/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:14.984 [192/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:14.984 [193/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:14.984 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:14.984 [195/268] Linking static target drivers/librte_bus_vdev.a 00:02:14.984 [196/268] Generating symbol file 
lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:14.984 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:14.984 [198/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:14.984 [199/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:14.984 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:14.985 [201/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:14.985 [202/268] Linking static target lib/librte_hash.a 00:02:14.985 [203/268] Linking static target lib/librte_cryptodev.a 00:02:14.985 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:14.985 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.243 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.243 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.243 [208/268] Linking static target drivers/librte_bus_pci.a 00:02:15.243 [209/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.243 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:15.243 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.243 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.243 [213/268] Linking static target drivers/librte_mempool_ring.a 00:02:15.243 [214/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.243 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.243 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.502 [217/268] Generating lib/dmadev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:15.502 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.502 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.502 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.502 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:15.502 [222/268] Linking static target lib/librte_ethdev.a 00:02:15.759 [223/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.759 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.759 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:16.019 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.019 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.956 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:16.956 [229/268] Linking static target lib/librte_vhost.a 00:02:16.956 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.861 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.137 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.708 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.708 [234/268] Linking target lib/librte_eal.so.24.1 00:02:24.708 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:24.708 [236/268] Linking target lib/librte_ring.so.24.1 00:02:24.708 [237/268] Linking target lib/librte_pci.so.24.1 00:02:24.708 [238/268] Linking target 
drivers/librte_bus_vdev.so.24.1 00:02:24.708 [239/268] Linking target lib/librte_timer.so.24.1 00:02:24.708 [240/268] Linking target lib/librte_meter.so.24.1 00:02:24.708 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:24.966 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.966 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.967 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.967 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.967 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.967 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.967 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:24.967 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:25.225 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:25.225 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.225 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:25.225 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.225 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.484 [255/268] Linking target lib/librte_net.so.24.1 00:02:25.484 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:25.484 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:25.484 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:25.484 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.484 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.484 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:25.484 [262/268] Linking target lib/librte_hash.so.24.1 00:02:25.484 [263/268] Linking target 
lib/librte_security.so.24.1 00:02:25.484 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:25.743 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:25.743 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:25.743 [267/268] Linking target lib/librte_power.so.24.1 00:02:25.743 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:25.743 INFO: autodetecting backend as ninja 00:02:25.743 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:37.955 CC lib/ut/ut.o 00:02:37.955 CC lib/log/log.o 00:02:37.955 CC lib/log/log_flags.o 00:02:37.955 CC lib/ut_mock/mock.o 00:02:37.955 CC lib/log/log_deprecated.o 00:02:37.955 LIB libspdk_log.a 00:02:37.955 LIB libspdk_ut.a 00:02:37.955 LIB libspdk_ut_mock.a 00:02:37.955 SO libspdk_log.so.7.1 00:02:37.955 SO libspdk_ut.so.2.0 00:02:37.955 SO libspdk_ut_mock.so.6.0 00:02:37.955 SYMLINK libspdk_log.so 00:02:37.955 SYMLINK libspdk_ut.so 00:02:37.955 SYMLINK libspdk_ut_mock.so 00:02:37.955 CC lib/util/base64.o 00:02:37.955 CC lib/util/bit_array.o 00:02:37.955 CC lib/util/cpuset.o 00:02:37.955 CC lib/util/crc16.o 00:02:37.955 CC lib/util/crc32.o 00:02:37.955 CC lib/util/crc32c.o 00:02:37.955 CC lib/util/crc32_ieee.o 00:02:37.955 CC lib/util/crc64.o 00:02:37.955 CC lib/util/dif.o 00:02:37.955 CC lib/util/fd.o 00:02:37.955 CC lib/dma/dma.o 00:02:37.955 CC lib/util/fd_group.o 00:02:37.955 CC lib/util/file.o 00:02:37.955 CC lib/util/hexlify.o 00:02:37.955 CC lib/util/iov.o 00:02:37.955 CC lib/util/math.o 00:02:37.955 CXX lib/trace_parser/trace.o 00:02:37.955 CC lib/util/net.o 00:02:37.955 CC lib/util/pipe.o 00:02:37.955 CC lib/util/strerror_tls.o 00:02:37.955 CC lib/ioat/ioat.o 00:02:37.955 CC lib/util/string.o 00:02:37.955 CC lib/util/uuid.o 00:02:37.955 CC lib/util/xor.o 00:02:37.955 CC lib/util/md5.o 00:02:37.955 CC lib/util/zipf.o 00:02:37.955 CC 
lib/vfio_user/host/vfio_user_pci.o 00:02:37.955 CC lib/vfio_user/host/vfio_user.o 00:02:37.955 LIB libspdk_dma.a 00:02:37.955 SO libspdk_dma.so.5.0 00:02:37.955 LIB libspdk_ioat.a 00:02:37.955 SYMLINK libspdk_dma.so 00:02:37.955 SO libspdk_ioat.so.7.0 00:02:37.955 SYMLINK libspdk_ioat.so 00:02:37.955 LIB libspdk_vfio_user.a 00:02:37.955 SO libspdk_vfio_user.so.5.0 00:02:37.955 LIB libspdk_util.a 00:02:37.955 SYMLINK libspdk_vfio_user.so 00:02:37.955 SO libspdk_util.so.10.1 00:02:37.955 SYMLINK libspdk_util.so 00:02:37.955 LIB libspdk_trace_parser.a 00:02:37.955 SO libspdk_trace_parser.so.6.0 00:02:37.955 SYMLINK libspdk_trace_parser.so 00:02:37.955 CC lib/conf/conf.o 00:02:37.955 CC lib/env_dpdk/env.o 00:02:37.955 CC lib/rdma_utils/rdma_utils.o 00:02:37.955 CC lib/env_dpdk/memory.o 00:02:37.955 CC lib/env_dpdk/pci.o 00:02:37.955 CC lib/vmd/vmd.o 00:02:37.955 CC lib/idxd/idxd.o 00:02:37.955 CC lib/json/json_parse.o 00:02:37.955 CC lib/env_dpdk/init.o 00:02:37.955 CC lib/idxd/idxd_user.o 00:02:37.955 CC lib/vmd/led.o 00:02:37.955 CC lib/json/json_util.o 00:02:37.955 CC lib/env_dpdk/threads.o 00:02:37.955 CC lib/idxd/idxd_kernel.o 00:02:37.955 CC lib/json/json_write.o 00:02:37.955 CC lib/env_dpdk/pci_ioat.o 00:02:37.955 CC lib/env_dpdk/pci_virtio.o 00:02:37.955 CC lib/env_dpdk/pci_vmd.o 00:02:37.955 CC lib/env_dpdk/pci_idxd.o 00:02:37.955 CC lib/env_dpdk/pci_event.o 00:02:37.955 CC lib/env_dpdk/sigbus_handler.o 00:02:37.955 CC lib/env_dpdk/pci_dpdk.o 00:02:37.955 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:37.955 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:37.955 LIB libspdk_conf.a 00:02:37.955 SO libspdk_conf.so.6.0 00:02:37.955 LIB libspdk_rdma_utils.a 00:02:37.955 SYMLINK libspdk_conf.so 00:02:37.955 LIB libspdk_json.a 00:02:37.955 SO libspdk_rdma_utils.so.1.0 00:02:37.955 SO libspdk_json.so.6.0 00:02:37.955 SYMLINK libspdk_rdma_utils.so 00:02:37.955 SYMLINK libspdk_json.so 00:02:37.955 LIB libspdk_idxd.a 00:02:37.955 SO libspdk_idxd.so.12.1 00:02:37.955 LIB libspdk_vmd.a 
00:02:37.955 SO libspdk_vmd.so.6.0 00:02:37.956 SYMLINK libspdk_idxd.so 00:02:38.213 SYMLINK libspdk_vmd.so 00:02:38.213 CC lib/rdma_provider/common.o 00:02:38.213 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:38.213 CC lib/jsonrpc/jsonrpc_server.o 00:02:38.213 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:38.213 CC lib/jsonrpc/jsonrpc_client.o 00:02:38.213 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:38.472 LIB libspdk_rdma_provider.a 00:02:38.472 SO libspdk_rdma_provider.so.7.0 00:02:38.472 LIB libspdk_jsonrpc.a 00:02:38.472 SO libspdk_jsonrpc.so.6.0 00:02:38.472 SYMLINK libspdk_rdma_provider.so 00:02:38.472 SYMLINK libspdk_jsonrpc.so 00:02:38.472 LIB libspdk_env_dpdk.a 00:02:38.731 SO libspdk_env_dpdk.so.15.1 00:02:38.731 SYMLINK libspdk_env_dpdk.so 00:02:38.990 CC lib/rpc/rpc.o 00:02:38.990 LIB libspdk_rpc.a 00:02:38.990 SO libspdk_rpc.so.6.0 00:02:39.250 SYMLINK libspdk_rpc.so 00:02:39.510 CC lib/notify/notify.o 00:02:39.510 CC lib/notify/notify_rpc.o 00:02:39.510 CC lib/trace/trace.o 00:02:39.510 CC lib/trace/trace_flags.o 00:02:39.510 CC lib/trace/trace_rpc.o 00:02:39.510 CC lib/keyring/keyring.o 00:02:39.510 CC lib/keyring/keyring_rpc.o 00:02:39.769 LIB libspdk_notify.a 00:02:39.769 SO libspdk_notify.so.6.0 00:02:39.769 LIB libspdk_keyring.a 00:02:39.769 LIB libspdk_trace.a 00:02:39.769 SO libspdk_keyring.so.2.0 00:02:39.769 SYMLINK libspdk_notify.so 00:02:39.769 SO libspdk_trace.so.11.0 00:02:39.769 SYMLINK libspdk_keyring.so 00:02:39.769 SYMLINK libspdk_trace.so 00:02:40.338 CC lib/thread/thread.o 00:02:40.338 CC lib/sock/sock.o 00:02:40.338 CC lib/thread/iobuf.o 00:02:40.338 CC lib/sock/sock_rpc.o 00:02:40.596 LIB libspdk_sock.a 00:02:40.596 SO libspdk_sock.so.10.0 00:02:40.596 SYMLINK libspdk_sock.so 00:02:40.854 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:40.854 CC lib/nvme/nvme_ctrlr.o 00:02:40.854 CC lib/nvme/nvme_fabric.o 00:02:40.854 CC lib/nvme/nvme_ns_cmd.o 00:02:40.854 CC lib/nvme/nvme_ns.o 00:02:40.854 CC lib/nvme/nvme_pcie_common.o 00:02:40.854 CC 
lib/nvme/nvme_pcie.o 00:02:40.854 CC lib/nvme/nvme_qpair.o 00:02:40.854 CC lib/nvme/nvme.o 00:02:40.854 CC lib/nvme/nvme_quirks.o 00:02:40.854 CC lib/nvme/nvme_transport.o 00:02:40.854 CC lib/nvme/nvme_discovery.o 00:02:40.854 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:40.854 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:40.854 CC lib/nvme/nvme_tcp.o 00:02:40.854 CC lib/nvme/nvme_opal.o 00:02:40.854 CC lib/nvme/nvme_io_msg.o 00:02:40.854 CC lib/nvme/nvme_poll_group.o 00:02:40.854 CC lib/nvme/nvme_zns.o 00:02:40.854 CC lib/nvme/nvme_stubs.o 00:02:40.854 CC lib/nvme/nvme_auth.o 00:02:40.854 CC lib/nvme/nvme_cuse.o 00:02:40.854 CC lib/nvme/nvme_vfio_user.o 00:02:40.854 CC lib/nvme/nvme_rdma.o 00:02:41.420 LIB libspdk_thread.a 00:02:41.420 SO libspdk_thread.so.11.0 00:02:41.420 SYMLINK libspdk_thread.so 00:02:41.679 CC lib/vfu_tgt/tgt_endpoint.o 00:02:41.679 CC lib/vfu_tgt/tgt_rpc.o 00:02:41.679 CC lib/virtio/virtio.o 00:02:41.679 CC lib/virtio/virtio_vhost_user.o 00:02:41.679 CC lib/virtio/virtio_vfio_user.o 00:02:41.679 CC lib/virtio/virtio_pci.o 00:02:41.679 CC lib/init/json_config.o 00:02:41.679 CC lib/init/subsystem_rpc.o 00:02:41.679 CC lib/init/subsystem.o 00:02:41.679 CC lib/init/rpc.o 00:02:41.679 CC lib/accel/accel.o 00:02:41.679 CC lib/accel/accel_rpc.o 00:02:41.679 CC lib/accel/accel_sw.o 00:02:41.679 CC lib/blob/blobstore.o 00:02:41.679 CC lib/blob/request.o 00:02:41.679 CC lib/blob/zeroes.o 00:02:41.679 CC lib/fsdev/fsdev.o 00:02:41.679 CC lib/blob/blob_bs_dev.o 00:02:41.679 CC lib/fsdev/fsdev_io.o 00:02:41.679 CC lib/fsdev/fsdev_rpc.o 00:02:41.936 LIB libspdk_init.a 00:02:41.936 SO libspdk_init.so.6.0 00:02:41.936 LIB libspdk_vfu_tgt.a 00:02:41.936 LIB libspdk_virtio.a 00:02:41.936 SO libspdk_vfu_tgt.so.3.0 00:02:41.936 SO libspdk_virtio.so.7.0 00:02:41.936 SYMLINK libspdk_init.so 00:02:42.195 SYMLINK libspdk_vfu_tgt.so 00:02:42.195 SYMLINK libspdk_virtio.so 00:02:42.195 LIB libspdk_fsdev.a 00:02:42.195 SO libspdk_fsdev.so.2.0 00:02:42.454 SYMLINK libspdk_fsdev.so 
00:02:42.454 CC lib/event/app.o 00:02:42.454 CC lib/event/reactor.o 00:02:42.454 CC lib/event/log_rpc.o 00:02:42.454 CC lib/event/app_rpc.o 00:02:42.454 CC lib/event/scheduler_static.o 00:02:42.454 LIB libspdk_accel.a 00:02:42.454 SO libspdk_accel.so.16.0 00:02:42.713 LIB libspdk_nvme.a 00:02:42.713 SYMLINK libspdk_accel.so 00:02:42.713 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:42.713 LIB libspdk_event.a 00:02:42.713 SO libspdk_nvme.so.15.0 00:02:42.713 SO libspdk_event.so.14.0 00:02:42.713 SYMLINK libspdk_event.so 00:02:42.971 SYMLINK libspdk_nvme.so 00:02:42.971 CC lib/bdev/bdev.o 00:02:42.971 CC lib/bdev/bdev_rpc.o 00:02:42.971 CC lib/bdev/bdev_zone.o 00:02:42.971 CC lib/bdev/part.o 00:02:42.971 CC lib/bdev/scsi_nvme.o 00:02:43.229 LIB libspdk_fuse_dispatcher.a 00:02:43.229 SO libspdk_fuse_dispatcher.so.1.0 00:02:43.229 SYMLINK libspdk_fuse_dispatcher.so 00:02:43.797 LIB libspdk_blob.a 00:02:44.056 SO libspdk_blob.so.12.0 00:02:44.056 SYMLINK libspdk_blob.so 00:02:44.314 CC lib/blobfs/blobfs.o 00:02:44.314 CC lib/blobfs/tree.o 00:02:44.314 CC lib/lvol/lvol.o 00:02:44.896 LIB libspdk_bdev.a 00:02:44.896 SO libspdk_bdev.so.17.0 00:02:44.896 LIB libspdk_blobfs.a 00:02:44.896 SYMLINK libspdk_bdev.so 00:02:44.896 SO libspdk_blobfs.so.11.0 00:02:44.896 LIB libspdk_lvol.a 00:02:45.174 SYMLINK libspdk_blobfs.so 00:02:45.174 SO libspdk_lvol.so.11.0 00:02:45.174 SYMLINK libspdk_lvol.so 00:02:45.174 CC lib/scsi/dev.o 00:02:45.174 CC lib/scsi/lun.o 00:02:45.174 CC lib/scsi/port.o 00:02:45.174 CC lib/scsi/scsi.o 00:02:45.174 CC lib/scsi/scsi_bdev.o 00:02:45.174 CC lib/scsi/scsi_pr.o 00:02:45.174 CC lib/scsi/scsi_rpc.o 00:02:45.174 CC lib/scsi/task.o 00:02:45.174 CC lib/nvmf/ctrlr.o 00:02:45.174 CC lib/nvmf/ctrlr_discovery.o 00:02:45.174 CC lib/nbd/nbd.o 00:02:45.174 CC lib/nvmf/ctrlr_bdev.o 00:02:45.174 CC lib/nvmf/subsystem.o 00:02:45.174 CC lib/nbd/nbd_rpc.o 00:02:45.174 CC lib/ftl/ftl_core.o 00:02:45.174 CC lib/ublk/ublk.o 00:02:45.174 CC lib/nvmf/nvmf.o 00:02:45.476 
CC lib/ftl/ftl_init.o 00:02:45.476 CC lib/nvmf/nvmf_rpc.o 00:02:45.476 CC lib/ublk/ublk_rpc.o 00:02:45.476 CC lib/ftl/ftl_layout.o 00:02:45.476 CC lib/ftl/ftl_debug.o 00:02:45.476 CC lib/nvmf/transport.o 00:02:45.476 CC lib/ftl/ftl_io.o 00:02:45.476 CC lib/nvmf/tcp.o 00:02:45.476 CC lib/ftl/ftl_sb.o 00:02:45.476 CC lib/nvmf/stubs.o 00:02:45.476 CC lib/nvmf/mdns_server.o 00:02:45.476 CC lib/ftl/ftl_l2p.o 00:02:45.476 CC lib/ftl/ftl_l2p_flat.o 00:02:45.476 CC lib/nvmf/vfio_user.o 00:02:45.476 CC lib/ftl/ftl_nv_cache.o 00:02:45.476 CC lib/nvmf/rdma.o 00:02:45.476 CC lib/ftl/ftl_band.o 00:02:45.476 CC lib/nvmf/auth.o 00:02:45.476 CC lib/ftl/ftl_band_ops.o 00:02:45.476 CC lib/ftl/ftl_writer.o 00:02:45.476 CC lib/ftl/ftl_rq.o 00:02:45.476 CC lib/ftl/ftl_reloc.o 00:02:45.476 CC lib/ftl/ftl_l2p_cache.o 00:02:45.476 CC lib/ftl/ftl_p2l_log.o 00:02:45.476 CC lib/ftl/ftl_p2l.o 00:02:45.476 CC lib/ftl/mngt/ftl_mngt.o 00:02:45.476 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:45.476 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:45.476 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:45.476 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:45.476 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:45.476 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:45.476 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:45.476 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:45.476 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:45.476 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:45.476 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:45.476 CC lib/ftl/utils/ftl_conf.o 00:02:45.476 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:45.476 CC lib/ftl/utils/ftl_md.o 00:02:45.476 CC lib/ftl/utils/ftl_mempool.o 00:02:45.476 CC lib/ftl/utils/ftl_bitmap.o 00:02:45.476 CC lib/ftl/utils/ftl_property.o 00:02:45.476 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:45.476 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:45.476 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:45.476 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:45.476 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:45.476 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:02:45.476 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:45.476 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:45.476 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:45.476 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:45.476 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:45.476 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:45.476 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:45.476 CC lib/ftl/base/ftl_base_dev.o 00:02:45.476 CC lib/ftl/ftl_trace.o 00:02:45.476 CC lib/ftl/base/ftl_base_bdev.o 00:02:46.067 LIB libspdk_scsi.a 00:02:46.067 LIB libspdk_nbd.a 00:02:46.067 SO libspdk_nbd.so.7.0 00:02:46.067 SO libspdk_scsi.so.9.0 00:02:46.067 LIB libspdk_ublk.a 00:02:46.067 SYMLINK libspdk_nbd.so 00:02:46.067 SYMLINK libspdk_scsi.so 00:02:46.067 SO libspdk_ublk.so.3.0 00:02:46.067 SYMLINK libspdk_ublk.so 00:02:46.327 LIB libspdk_ftl.a 00:02:46.327 CC lib/iscsi/conn.o 00:02:46.327 CC lib/iscsi/init_grp.o 00:02:46.327 CC lib/iscsi/iscsi.o 00:02:46.327 CC lib/vhost/vhost.o 00:02:46.327 CC lib/iscsi/param.o 00:02:46.327 CC lib/vhost/vhost_rpc.o 00:02:46.327 CC lib/iscsi/tgt_node.o 00:02:46.327 CC lib/iscsi/portal_grp.o 00:02:46.327 CC lib/vhost/vhost_scsi.o 00:02:46.327 CC lib/vhost/vhost_blk.o 00:02:46.327 CC lib/iscsi/iscsi_subsystem.o 00:02:46.327 CC lib/vhost/rte_vhost_user.o 00:02:46.327 CC lib/iscsi/iscsi_rpc.o 00:02:46.327 CC lib/iscsi/task.o 00:02:46.586 SO libspdk_ftl.so.9.0 00:02:46.844 SYMLINK libspdk_ftl.so 00:02:47.103 LIB libspdk_nvmf.a 00:02:47.103 SO libspdk_nvmf.so.20.0 00:02:47.362 SYMLINK libspdk_nvmf.so 00:02:47.362 LIB libspdk_vhost.a 00:02:47.362 SO libspdk_vhost.so.8.0 00:02:47.362 SYMLINK libspdk_vhost.so 00:02:47.362 LIB libspdk_iscsi.a 00:02:47.621 SO libspdk_iscsi.so.8.0 00:02:47.621 SYMLINK libspdk_iscsi.so 00:02:48.188 CC module/env_dpdk/env_dpdk_rpc.o 00:02:48.188 CC module/vfu_device/vfu_virtio_blk.o 00:02:48.188 CC module/vfu_device/vfu_virtio.o 00:02:48.188 CC module/vfu_device/vfu_virtio_scsi.o 00:02:48.188 CC module/vfu_device/vfu_virtio_rpc.o 00:02:48.188 CC 
module/vfu_device/vfu_virtio_fs.o 00:02:48.446 CC module/accel/ioat/accel_ioat.o 00:02:48.446 CC module/accel/ioat/accel_ioat_rpc.o 00:02:48.446 LIB libspdk_env_dpdk_rpc.a 00:02:48.446 CC module/accel/error/accel_error.o 00:02:48.446 CC module/accel/dsa/accel_dsa.o 00:02:48.446 CC module/accel/dsa/accel_dsa_rpc.o 00:02:48.446 CC module/accel/error/accel_error_rpc.o 00:02:48.446 CC module/keyring/linux/keyring.o 00:02:48.446 CC module/keyring/linux/keyring_rpc.o 00:02:48.446 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:48.446 CC module/accel/iaa/accel_iaa.o 00:02:48.446 CC module/blob/bdev/blob_bdev.o 00:02:48.446 CC module/accel/iaa/accel_iaa_rpc.o 00:02:48.446 CC module/sock/posix/posix.o 00:02:48.446 CC module/scheduler/gscheduler/gscheduler.o 00:02:48.446 CC module/fsdev/aio/fsdev_aio.o 00:02:48.446 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:48.446 CC module/fsdev/aio/linux_aio_mgr.o 00:02:48.446 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:48.446 CC module/keyring/file/keyring.o 00:02:48.446 CC module/keyring/file/keyring_rpc.o 00:02:48.446 SO libspdk_env_dpdk_rpc.so.6.0 00:02:48.446 SYMLINK libspdk_env_dpdk_rpc.so 00:02:48.446 LIB libspdk_keyring_linux.a 00:02:48.446 LIB libspdk_scheduler_dpdk_governor.a 00:02:48.446 LIB libspdk_keyring_file.a 00:02:48.446 LIB libspdk_scheduler_gscheduler.a 00:02:48.446 LIB libspdk_accel_ioat.a 00:02:48.446 SO libspdk_keyring_linux.so.1.0 00:02:48.446 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:48.446 SO libspdk_keyring_file.so.2.0 00:02:48.446 LIB libspdk_accel_error.a 00:02:48.446 SO libspdk_accel_ioat.so.6.0 00:02:48.446 SO libspdk_scheduler_gscheduler.so.4.0 00:02:48.446 LIB libspdk_accel_iaa.a 00:02:48.707 LIB libspdk_scheduler_dynamic.a 00:02:48.707 SO libspdk_accel_error.so.2.0 00:02:48.707 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:48.707 SYMLINK libspdk_keyring_file.so 00:02:48.707 SO libspdk_scheduler_dynamic.so.4.0 00:02:48.707 SYMLINK libspdk_keyring_linux.so 00:02:48.707 SO 
libspdk_accel_iaa.so.3.0 00:02:48.707 SYMLINK libspdk_accel_ioat.so 00:02:48.707 SYMLINK libspdk_scheduler_gscheduler.so 00:02:48.707 LIB libspdk_accel_dsa.a 00:02:48.707 SYMLINK libspdk_accel_error.so 00:02:48.707 LIB libspdk_blob_bdev.a 00:02:48.707 SYMLINK libspdk_scheduler_dynamic.so 00:02:48.707 SO libspdk_blob_bdev.so.12.0 00:02:48.707 SO libspdk_accel_dsa.so.5.0 00:02:48.707 SYMLINK libspdk_accel_iaa.so 00:02:48.707 SYMLINK libspdk_blob_bdev.so 00:02:48.707 LIB libspdk_vfu_device.a 00:02:48.707 SYMLINK libspdk_accel_dsa.so 00:02:48.707 SO libspdk_vfu_device.so.3.0 00:02:48.966 SYMLINK libspdk_vfu_device.so 00:02:48.966 LIB libspdk_fsdev_aio.a 00:02:48.966 SO libspdk_fsdev_aio.so.1.0 00:02:48.966 LIB libspdk_sock_posix.a 00:02:48.966 SO libspdk_sock_posix.so.6.0 00:02:48.966 SYMLINK libspdk_fsdev_aio.so 00:02:48.966 SYMLINK libspdk_sock_posix.so 00:02:49.224 CC module/bdev/error/vbdev_error.o 00:02:49.224 CC module/bdev/error/vbdev_error_rpc.o 00:02:49.224 CC module/bdev/delay/vbdev_delay.o 00:02:49.224 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:49.224 CC module/bdev/raid/bdev_raid.o 00:02:49.224 CC module/bdev/split/vbdev_split.o 00:02:49.224 CC module/bdev/raid/bdev_raid_rpc.o 00:02:49.224 CC module/bdev/split/vbdev_split_rpc.o 00:02:49.224 CC module/bdev/raid/bdev_raid_sb.o 00:02:49.224 CC module/bdev/malloc/bdev_malloc.o 00:02:49.224 CC module/bdev/raid/raid0.o 00:02:49.224 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:49.224 CC module/bdev/raid/raid1.o 00:02:49.224 CC module/bdev/gpt/gpt.o 00:02:49.224 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:49.224 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:49.224 CC module/bdev/raid/concat.o 00:02:49.224 CC module/bdev/lvol/vbdev_lvol.o 00:02:49.224 CC module/bdev/gpt/vbdev_gpt.o 00:02:49.224 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:49.224 CC module/bdev/passthru/vbdev_passthru.o 00:02:49.224 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:49.224 CC module/bdev/nvme/bdev_nvme.o 00:02:49.224 
CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:49.224 CC module/bdev/nvme/nvme_rpc.o 00:02:49.224 CC module/blobfs/bdev/blobfs_bdev.o 00:02:49.224 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:49.224 CC module/bdev/nvme/bdev_mdns_client.o 00:02:49.224 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:49.224 CC module/bdev/aio/bdev_aio.o 00:02:49.224 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:49.224 CC module/bdev/iscsi/bdev_iscsi.o 00:02:49.224 CC module/bdev/nvme/vbdev_opal.o 00:02:49.224 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:49.224 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:49.224 CC module/bdev/ftl/bdev_ftl.o 00:02:49.224 CC module/bdev/aio/bdev_aio_rpc.o 00:02:49.224 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:49.224 CC module/bdev/null/bdev_null.o 00:02:49.224 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:49.224 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:49.224 CC module/bdev/null/bdev_null_rpc.o 00:02:49.484 LIB libspdk_bdev_split.a 00:02:49.484 LIB libspdk_blobfs_bdev.a 00:02:49.484 SO libspdk_bdev_split.so.6.0 00:02:49.484 LIB libspdk_bdev_null.a 00:02:49.484 SO libspdk_blobfs_bdev.so.6.0 00:02:49.484 LIB libspdk_bdev_error.a 00:02:49.484 LIB libspdk_bdev_ftl.a 00:02:49.484 SO libspdk_bdev_error.so.6.0 00:02:49.484 SO libspdk_bdev_null.so.6.0 00:02:49.484 LIB libspdk_bdev_gpt.a 00:02:49.484 SYMLINK libspdk_bdev_split.so 00:02:49.484 SO libspdk_bdev_ftl.so.6.0 00:02:49.484 SYMLINK libspdk_blobfs_bdev.so 00:02:49.484 LIB libspdk_bdev_zone_block.a 00:02:49.743 LIB libspdk_bdev_delay.a 00:02:49.743 LIB libspdk_bdev_malloc.a 00:02:49.743 LIB libspdk_bdev_passthru.a 00:02:49.743 SO libspdk_bdev_gpt.so.6.0 00:02:49.743 SO libspdk_bdev_zone_block.so.6.0 00:02:49.743 SYMLINK libspdk_bdev_error.so 00:02:49.743 LIB libspdk_bdev_iscsi.a 00:02:49.743 SO libspdk_bdev_delay.so.6.0 00:02:49.743 SYMLINK libspdk_bdev_null.so 00:02:49.743 LIB libspdk_bdev_aio.a 00:02:49.743 SO libspdk_bdev_passthru.so.6.0 00:02:49.743 SYMLINK libspdk_bdev_ftl.so 00:02:49.743 SO 
libspdk_bdev_malloc.so.6.0 00:02:49.743 SO libspdk_bdev_iscsi.so.6.0 00:02:49.743 SYMLINK libspdk_bdev_zone_block.so 00:02:49.743 SYMLINK libspdk_bdev_gpt.so 00:02:49.743 SO libspdk_bdev_aio.so.6.0 00:02:49.743 SYMLINK libspdk_bdev_malloc.so 00:02:49.743 SYMLINK libspdk_bdev_delay.so 00:02:49.743 SYMLINK libspdk_bdev_passthru.so 00:02:49.743 SYMLINK libspdk_bdev_aio.so 00:02:49.743 SYMLINK libspdk_bdev_iscsi.so 00:02:49.743 LIB libspdk_bdev_lvol.a 00:02:49.743 LIB libspdk_bdev_virtio.a 00:02:49.743 SO libspdk_bdev_lvol.so.6.0 00:02:49.743 SO libspdk_bdev_virtio.so.6.0 00:02:50.001 SYMLINK libspdk_bdev_lvol.so 00:02:50.001 SYMLINK libspdk_bdev_virtio.so 00:02:50.001 LIB libspdk_bdev_raid.a 00:02:50.001 SO libspdk_bdev_raid.so.6.0 00:02:50.261 SYMLINK libspdk_bdev_raid.so 00:02:51.198 LIB libspdk_bdev_nvme.a 00:02:51.198 SO libspdk_bdev_nvme.so.7.1 00:02:51.198 SYMLINK libspdk_bdev_nvme.so 00:02:52.134 CC module/event/subsystems/iobuf/iobuf.o 00:02:52.134 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:52.134 CC module/event/subsystems/sock/sock.o 00:02:52.134 CC module/event/subsystems/keyring/keyring.o 00:02:52.134 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:52.134 CC module/event/subsystems/vmd/vmd.o 00:02:52.134 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:52.134 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:52.134 CC module/event/subsystems/scheduler/scheduler.o 00:02:52.134 CC module/event/subsystems/fsdev/fsdev.o 00:02:52.134 LIB libspdk_event_vhost_blk.a 00:02:52.134 LIB libspdk_event_fsdev.a 00:02:52.134 LIB libspdk_event_vmd.a 00:02:52.134 LIB libspdk_event_keyring.a 00:02:52.134 LIB libspdk_event_vfu_tgt.a 00:02:52.134 LIB libspdk_event_iobuf.a 00:02:52.134 LIB libspdk_event_sock.a 00:02:52.134 LIB libspdk_event_scheduler.a 00:02:52.134 SO libspdk_event_vhost_blk.so.3.0 00:02:52.134 SO libspdk_event_fsdev.so.1.0 00:02:52.134 SO libspdk_event_sock.so.5.0 00:02:52.134 SO libspdk_event_scheduler.so.4.0 00:02:52.134 SO 
libspdk_event_keyring.so.1.0 00:02:52.134 SO libspdk_event_vfu_tgt.so.3.0 00:02:52.134 SO libspdk_event_vmd.so.6.0 00:02:52.134 SO libspdk_event_iobuf.so.3.0 00:02:52.134 SYMLINK libspdk_event_vhost_blk.so 00:02:52.134 SYMLINK libspdk_event_fsdev.so 00:02:52.134 SYMLINK libspdk_event_sock.so 00:02:52.134 SYMLINK libspdk_event_keyring.so 00:02:52.134 SYMLINK libspdk_event_vmd.so 00:02:52.134 SYMLINK libspdk_event_vfu_tgt.so 00:02:52.134 SYMLINK libspdk_event_scheduler.so 00:02:52.134 SYMLINK libspdk_event_iobuf.so 00:02:52.704 CC module/event/subsystems/accel/accel.o 00:02:52.704 LIB libspdk_event_accel.a 00:02:52.704 SO libspdk_event_accel.so.6.0 00:02:52.704 SYMLINK libspdk_event_accel.so 00:02:53.273 CC module/event/subsystems/bdev/bdev.o 00:02:53.273 LIB libspdk_event_bdev.a 00:02:53.273 SO libspdk_event_bdev.so.6.0 00:02:53.273 SYMLINK libspdk_event_bdev.so 00:02:53.842 CC module/event/subsystems/ublk/ublk.o 00:02:53.842 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:53.842 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:53.842 CC module/event/subsystems/scsi/scsi.o 00:02:53.842 CC module/event/subsystems/nbd/nbd.o 00:02:53.842 LIB libspdk_event_ublk.a 00:02:53.842 LIB libspdk_event_nbd.a 00:02:53.842 LIB libspdk_event_scsi.a 00:02:53.842 SO libspdk_event_nbd.so.6.0 00:02:53.842 SO libspdk_event_ublk.so.3.0 00:02:53.842 SO libspdk_event_scsi.so.6.0 00:02:53.842 LIB libspdk_event_nvmf.a 00:02:53.842 SYMLINK libspdk_event_nbd.so 00:02:53.842 SYMLINK libspdk_event_scsi.so 00:02:53.842 SO libspdk_event_nvmf.so.6.0 00:02:53.842 SYMLINK libspdk_event_ublk.so 00:02:54.101 SYMLINK libspdk_event_nvmf.so 00:02:54.360 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:54.360 CC module/event/subsystems/iscsi/iscsi.o 00:02:54.360 LIB libspdk_event_vhost_scsi.a 00:02:54.360 LIB libspdk_event_iscsi.a 00:02:54.360 SO libspdk_event_vhost_scsi.so.3.0 00:02:54.619 SO libspdk_event_iscsi.so.6.0 00:02:54.619 SYMLINK libspdk_event_vhost_scsi.so 00:02:54.619 SYMLINK 
libspdk_event_iscsi.so 00:02:54.619 SO libspdk.so.6.0 00:02:54.619 SYMLINK libspdk.so 00:02:55.201 CXX app/trace/trace.o 00:02:55.202 CC app/trace_record/trace_record.o 00:02:55.202 CC app/spdk_lspci/spdk_lspci.o 00:02:55.202 CC app/spdk_nvme_discover/discovery_aer.o 00:02:55.202 CC app/spdk_nvme_perf/perf.o 00:02:55.202 CC test/rpc_client/rpc_client_test.o 00:02:55.202 CC app/spdk_nvme_identify/identify.o 00:02:55.202 CC app/spdk_top/spdk_top.o 00:02:55.202 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:55.202 TEST_HEADER include/spdk/accel_module.h 00:02:55.202 TEST_HEADER include/spdk/accel.h 00:02:55.202 TEST_HEADER include/spdk/barrier.h 00:02:55.202 TEST_HEADER include/spdk/base64.h 00:02:55.202 TEST_HEADER include/spdk/assert.h 00:02:55.202 TEST_HEADER include/spdk/bdev.h 00:02:55.202 TEST_HEADER include/spdk/bdev_zone.h 00:02:55.202 TEST_HEADER include/spdk/bdev_module.h 00:02:55.202 TEST_HEADER include/spdk/bit_array.h 00:02:55.202 TEST_HEADER include/spdk/bit_pool.h 00:02:55.202 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:55.202 TEST_HEADER include/spdk/blob_bdev.h 00:02:55.202 TEST_HEADER include/spdk/blobfs.h 00:02:55.202 TEST_HEADER include/spdk/blob.h 00:02:55.202 TEST_HEADER include/spdk/conf.h 00:02:55.202 TEST_HEADER include/spdk/cpuset.h 00:02:55.202 TEST_HEADER include/spdk/config.h 00:02:55.202 TEST_HEADER include/spdk/crc16.h 00:02:55.202 TEST_HEADER include/spdk/crc32.h 00:02:55.202 TEST_HEADER include/spdk/crc64.h 00:02:55.202 TEST_HEADER include/spdk/dif.h 00:02:55.202 TEST_HEADER include/spdk/endian.h 00:02:55.202 TEST_HEADER include/spdk/dma.h 00:02:55.202 TEST_HEADER include/spdk/env_dpdk.h 00:02:55.202 TEST_HEADER include/spdk/env.h 00:02:55.202 TEST_HEADER include/spdk/fd_group.h 00:02:55.202 TEST_HEADER include/spdk/fd.h 00:02:55.202 TEST_HEADER include/spdk/event.h 00:02:55.202 TEST_HEADER include/spdk/fsdev.h 00:02:55.202 TEST_HEADER include/spdk/file.h 00:02:55.202 TEST_HEADER include/spdk/fsdev_module.h 00:02:55.202 TEST_HEADER 
include/spdk/ftl.h 00:02:55.202 TEST_HEADER include/spdk/hexlify.h 00:02:55.202 TEST_HEADER include/spdk/gpt_spec.h 00:02:55.202 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:55.202 TEST_HEADER include/spdk/idxd_spec.h 00:02:55.202 TEST_HEADER include/spdk/idxd.h 00:02:55.202 TEST_HEADER include/spdk/histogram_data.h 00:02:55.202 TEST_HEADER include/spdk/init.h 00:02:55.202 TEST_HEADER include/spdk/ioat.h 00:02:55.202 TEST_HEADER include/spdk/ioat_spec.h 00:02:55.202 TEST_HEADER include/spdk/json.h 00:02:55.202 TEST_HEADER include/spdk/iscsi_spec.h 00:02:55.202 TEST_HEADER include/spdk/likely.h 00:02:55.202 TEST_HEADER include/spdk/keyring.h 00:02:55.202 TEST_HEADER include/spdk/jsonrpc.h 00:02:55.202 TEST_HEADER include/spdk/keyring_module.h 00:02:55.202 TEST_HEADER include/spdk/log.h 00:02:55.202 CC app/spdk_dd/spdk_dd.o 00:02:55.202 TEST_HEADER include/spdk/lvol.h 00:02:55.202 CC app/iscsi_tgt/iscsi_tgt.o 00:02:55.202 TEST_HEADER include/spdk/mmio.h 00:02:55.202 TEST_HEADER include/spdk/memory.h 00:02:55.202 TEST_HEADER include/spdk/md5.h 00:02:55.202 TEST_HEADER include/spdk/nbd.h 00:02:55.202 CC app/nvmf_tgt/nvmf_main.o 00:02:55.202 TEST_HEADER include/spdk/nvme.h 00:02:55.202 TEST_HEADER include/spdk/net.h 00:02:55.202 TEST_HEADER include/spdk/notify.h 00:02:55.202 TEST_HEADER include/spdk/nvme_intel.h 00:02:55.202 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:55.202 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:55.202 TEST_HEADER include/spdk/nvme_zns.h 00:02:55.202 TEST_HEADER include/spdk/nvme_spec.h 00:02:55.202 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:55.202 TEST_HEADER include/spdk/nvmf.h 00:02:55.202 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:55.202 TEST_HEADER include/spdk/nvmf_spec.h 00:02:55.202 TEST_HEADER include/spdk/nvmf_transport.h 00:02:55.202 TEST_HEADER include/spdk/opal.h 00:02:55.202 TEST_HEADER include/spdk/opal_spec.h 00:02:55.202 TEST_HEADER include/spdk/pipe.h 00:02:55.202 TEST_HEADER include/spdk/pci_ids.h 00:02:55.202 
TEST_HEADER include/spdk/scheduler.h 00:02:55.202 TEST_HEADER include/spdk/queue.h 00:02:55.202 TEST_HEADER include/spdk/rpc.h 00:02:55.202 TEST_HEADER include/spdk/reduce.h 00:02:55.202 TEST_HEADER include/spdk/scsi.h 00:02:55.202 TEST_HEADER include/spdk/scsi_spec.h 00:02:55.202 TEST_HEADER include/spdk/sock.h 00:02:55.202 TEST_HEADER include/spdk/stdinc.h 00:02:55.202 TEST_HEADER include/spdk/thread.h 00:02:55.202 TEST_HEADER include/spdk/string.h 00:02:55.202 TEST_HEADER include/spdk/trace.h 00:02:55.202 TEST_HEADER include/spdk/trace_parser.h 00:02:55.202 CC app/spdk_tgt/spdk_tgt.o 00:02:55.202 TEST_HEADER include/spdk/util.h 00:02:55.202 TEST_HEADER include/spdk/ublk.h 00:02:55.202 TEST_HEADER include/spdk/uuid.h 00:02:55.202 TEST_HEADER include/spdk/tree.h 00:02:55.202 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:55.202 TEST_HEADER include/spdk/version.h 00:02:55.202 TEST_HEADER include/spdk/vmd.h 00:02:55.202 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:55.202 TEST_HEADER include/spdk/xor.h 00:02:55.202 TEST_HEADER include/spdk/vhost.h 00:02:55.202 TEST_HEADER include/spdk/zipf.h 00:02:55.202 CXX test/cpp_headers/accel.o 00:02:55.202 CXX test/cpp_headers/accel_module.o 00:02:55.202 CXX test/cpp_headers/assert.o 00:02:55.202 CXX test/cpp_headers/barrier.o 00:02:55.202 CXX test/cpp_headers/bdev.o 00:02:55.202 CXX test/cpp_headers/bdev_module.o 00:02:55.202 CXX test/cpp_headers/base64.o 00:02:55.202 CXX test/cpp_headers/bit_array.o 00:02:55.202 CXX test/cpp_headers/bdev_zone.o 00:02:55.202 CXX test/cpp_headers/bit_pool.o 00:02:55.202 CXX test/cpp_headers/blobfs.o 00:02:55.202 CXX test/cpp_headers/blob.o 00:02:55.202 CXX test/cpp_headers/conf.o 00:02:55.202 CXX test/cpp_headers/blobfs_bdev.o 00:02:55.202 CXX test/cpp_headers/blob_bdev.o 00:02:55.202 CXX test/cpp_headers/crc32.o 00:02:55.202 CXX test/cpp_headers/config.o 00:02:55.202 CXX test/cpp_headers/crc16.o 00:02:55.202 CXX test/cpp_headers/cpuset.o 00:02:55.202 CXX test/cpp_headers/crc64.o 
00:02:55.202 CXX test/cpp_headers/dma.o 00:02:55.202 CXX test/cpp_headers/dif.o 00:02:55.202 CXX test/cpp_headers/endian.o 00:02:55.202 CXX test/cpp_headers/env.o 00:02:55.202 CXX test/cpp_headers/event.o 00:02:55.202 CXX test/cpp_headers/fd_group.o 00:02:55.202 CXX test/cpp_headers/env_dpdk.o 00:02:55.202 CXX test/cpp_headers/file.o 00:02:55.202 CXX test/cpp_headers/fd.o 00:02:55.202 CXX test/cpp_headers/fsdev.o 00:02:55.202 CXX test/cpp_headers/fsdev_module.o 00:02:55.202 CXX test/cpp_headers/ftl.o 00:02:55.202 CXX test/cpp_headers/fuse_dispatcher.o 00:02:55.202 CXX test/cpp_headers/hexlify.o 00:02:55.202 CXX test/cpp_headers/histogram_data.o 00:02:55.202 CXX test/cpp_headers/gpt_spec.o 00:02:55.202 CXX test/cpp_headers/idxd.o 00:02:55.202 CXX test/cpp_headers/init.o 00:02:55.202 CXX test/cpp_headers/idxd_spec.o 00:02:55.202 CXX test/cpp_headers/ioat.o 00:02:55.202 CXX test/cpp_headers/iscsi_spec.o 00:02:55.202 CXX test/cpp_headers/json.o 00:02:55.202 CXX test/cpp_headers/ioat_spec.o 00:02:55.202 CXX test/cpp_headers/jsonrpc.o 00:02:55.202 CXX test/cpp_headers/keyring_module.o 00:02:55.202 CXX test/cpp_headers/keyring.o 00:02:55.202 CXX test/cpp_headers/likely.o 00:02:55.202 CXX test/cpp_headers/lvol.o 00:02:55.202 CXX test/cpp_headers/log.o 00:02:55.202 CXX test/cpp_headers/md5.o 00:02:55.202 CXX test/cpp_headers/memory.o 00:02:55.202 CXX test/cpp_headers/mmio.o 00:02:55.202 CXX test/cpp_headers/net.o 00:02:55.202 CXX test/cpp_headers/nbd.o 00:02:55.202 CXX test/cpp_headers/notify.o 00:02:55.202 CXX test/cpp_headers/nvme_intel.o 00:02:55.202 CXX test/cpp_headers/nvme.o 00:02:55.202 CC examples/ioat/verify/verify.o 00:02:55.202 CXX test/cpp_headers/nvme_ocssd.o 00:02:55.202 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:55.202 CXX test/cpp_headers/nvme_spec.o 00:02:55.202 CXX test/cpp_headers/nvme_zns.o 00:02:55.202 CXX test/cpp_headers/nvmf_cmd.o 00:02:55.202 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:55.202 CXX test/cpp_headers/nvmf.o 00:02:55.202 CXX 
test/cpp_headers/nvmf_spec.o 00:02:55.202 CXX test/cpp_headers/nvmf_transport.o 00:02:55.202 CC examples/ioat/perf/perf.o 00:02:55.202 CXX test/cpp_headers/opal.o 00:02:55.202 CC examples/util/zipf/zipf.o 00:02:55.478 CC test/app/histogram_perf/histogram_perf.o 00:02:55.478 CC test/env/vtophys/vtophys.o 00:02:55.478 CC test/app/stub/stub.o 00:02:55.478 CC test/app/jsoncat/jsoncat.o 00:02:55.478 CC test/thread/poller_perf/poller_perf.o 00:02:55.478 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:55.478 CC app/fio/nvme/fio_plugin.o 00:02:55.478 CC test/env/pci/pci_ut.o 00:02:55.478 CC test/env/memory/memory_ut.o 00:02:55.478 CC app/fio/bdev/fio_plugin.o 00:02:55.478 CC test/app/bdev_svc/bdev_svc.o 00:02:55.478 CC test/dma/test_dma/test_dma.o 00:02:55.478 LINK spdk_lspci 00:02:55.478 LINK interrupt_tgt 00:02:55.743 LINK rpc_client_test 00:02:55.743 LINK iscsi_tgt 00:02:55.743 LINK spdk_nvme_discover 00:02:55.743 CC test/env/mem_callbacks/mem_callbacks.o 00:02:55.743 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:55.743 LINK nvmf_tgt 00:02:55.743 LINK spdk_tgt 00:02:55.743 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:55.743 LINK zipf 00:02:55.743 LINK histogram_perf 00:02:56.003 LINK poller_perf 00:02:56.003 CXX test/cpp_headers/opal_spec.o 00:02:56.003 LINK spdk_trace_record 00:02:56.003 CXX test/cpp_headers/pci_ids.o 00:02:56.003 CXX test/cpp_headers/pipe.o 00:02:56.003 CXX test/cpp_headers/queue.o 00:02:56.003 CXX test/cpp_headers/reduce.o 00:02:56.003 CXX test/cpp_headers/rpc.o 00:02:56.003 LINK env_dpdk_post_init 00:02:56.003 CXX test/cpp_headers/scheduler.o 00:02:56.003 LINK stub 00:02:56.003 CXX test/cpp_headers/scsi.o 00:02:56.003 CXX test/cpp_headers/scsi_spec.o 00:02:56.003 CXX test/cpp_headers/sock.o 00:02:56.003 CXX test/cpp_headers/stdinc.o 00:02:56.003 CXX test/cpp_headers/string.o 00:02:56.003 CXX test/cpp_headers/thread.o 00:02:56.003 CXX test/cpp_headers/trace.o 00:02:56.003 CXX test/cpp_headers/trace_parser.o 00:02:56.003 CXX 
test/cpp_headers/tree.o 00:02:56.003 CXX test/cpp_headers/ublk.o 00:02:56.003 CXX test/cpp_headers/util.o 00:02:56.003 CXX test/cpp_headers/uuid.o 00:02:56.003 CXX test/cpp_headers/version.o 00:02:56.003 CXX test/cpp_headers/vfio_user_pci.o 00:02:56.003 CXX test/cpp_headers/vhost.o 00:02:56.003 CXX test/cpp_headers/vfio_user_spec.o 00:02:56.003 CXX test/cpp_headers/vmd.o 00:02:56.003 CXX test/cpp_headers/xor.o 00:02:56.003 CXX test/cpp_headers/zipf.o 00:02:56.003 LINK jsoncat 00:02:56.003 LINK bdev_svc 00:02:56.003 LINK vtophys 00:02:56.003 LINK spdk_trace 00:02:56.003 LINK verify 00:02:56.003 LINK ioat_perf 00:02:56.003 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:56.003 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:56.261 LINK spdk_dd 00:02:56.261 LINK pci_ut 00:02:56.261 LINK spdk_nvme 00:02:56.261 CC examples/vmd/led/led.o 00:02:56.261 LINK spdk_bdev 00:02:56.519 CC examples/sock/hello_world/hello_sock.o 00:02:56.519 CC examples/idxd/perf/perf.o 00:02:56.519 LINK spdk_nvme_identify 00:02:56.519 CC examples/vmd/lsvmd/lsvmd.o 00:02:56.519 CC test/event/reactor/reactor.o 00:02:56.519 CC test/event/event_perf/event_perf.o 00:02:56.519 CC examples/thread/thread/thread_ex.o 00:02:56.519 CC app/vhost/vhost.o 00:02:56.519 CC test/event/reactor_perf/reactor_perf.o 00:02:56.519 LINK nvme_fuzz 00:02:56.519 CC test/event/app_repeat/app_repeat.o 00:02:56.519 LINK test_dma 00:02:56.519 CC test/event/scheduler/scheduler.o 00:02:56.519 LINK led 00:02:56.519 LINK vhost_fuzz 00:02:56.519 LINK spdk_nvme_perf 00:02:56.519 LINK lsvmd 00:02:56.519 LINK reactor 00:02:56.519 LINK mem_callbacks 00:02:56.519 LINK reactor_perf 00:02:56.519 LINK event_perf 00:02:56.519 LINK spdk_top 00:02:56.519 LINK vhost 00:02:56.519 LINK hello_sock 00:02:56.519 LINK app_repeat 00:02:56.778 LINK thread 00:02:56.778 LINK idxd_perf 00:02:56.778 LINK scheduler 00:02:57.036 LINK memory_ut 00:02:57.036 CC test/nvme/reset/reset.o 00:02:57.036 CC test/nvme/boot_partition/boot_partition.o 00:02:57.036 CC 
test/nvme/e2edp/nvme_dp.o 00:02:57.036 CC test/nvme/compliance/nvme_compliance.o 00:02:57.036 CC test/nvme/connect_stress/connect_stress.o 00:02:57.036 CC test/nvme/fdp/fdp.o 00:02:57.036 CC test/nvme/err_injection/err_injection.o 00:02:57.036 CC test/nvme/overhead/overhead.o 00:02:57.036 CC test/nvme/startup/startup.o 00:02:57.036 CC test/nvme/sgl/sgl.o 00:02:57.036 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:57.036 CC test/nvme/cuse/cuse.o 00:02:57.036 CC test/nvme/aer/aer.o 00:02:57.036 CC test/nvme/simple_copy/simple_copy.o 00:02:57.036 CC test/nvme/fused_ordering/fused_ordering.o 00:02:57.036 CC test/nvme/reserve/reserve.o 00:02:57.036 CC test/accel/dif/dif.o 00:02:57.036 CC test/blobfs/mkfs/mkfs.o 00:02:57.036 CC examples/nvme/reconnect/reconnect.o 00:02:57.036 CC examples/nvme/hello_world/hello_world.o 00:02:57.036 CC examples/nvme/abort/abort.o 00:02:57.036 CC examples/nvme/arbitration/arbitration.o 00:02:57.036 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:57.036 CC examples/nvme/hotplug/hotplug.o 00:02:57.036 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:57.036 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:57.036 CC test/lvol/esnap/esnap.o 00:02:57.036 CC examples/accel/perf/accel_perf.o 00:02:57.295 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:57.295 LINK startup 00:02:57.295 CC examples/blob/hello_world/hello_blob.o 00:02:57.295 LINK boot_partition 00:02:57.295 CC examples/blob/cli/blobcli.o 00:02:57.295 LINK connect_stress 00:02:57.295 LINK err_injection 00:02:57.295 LINK doorbell_aers 00:02:57.295 LINK reserve 00:02:57.295 LINK fused_ordering 00:02:57.295 LINK simple_copy 00:02:57.295 LINK reset 00:02:57.295 LINK mkfs 00:02:57.295 LINK aer 00:02:57.295 LINK nvme_dp 00:02:57.295 LINK cmb_copy 00:02:57.295 LINK sgl 00:02:57.295 LINK pmr_persistence 00:02:57.295 LINK nvme_compliance 00:02:57.295 LINK fdp 00:02:57.295 LINK overhead 00:02:57.295 LINK hello_world 00:02:57.295 LINK hotplug 00:02:57.553 LINK arbitration 00:02:57.553 LINK 
reconnect 00:02:57.553 LINK abort 00:02:57.553 LINK hello_blob 00:02:57.553 LINK hello_fsdev 00:02:57.553 LINK iscsi_fuzz 00:02:57.553 LINK nvme_manage 00:02:57.553 LINK accel_perf 00:02:57.553 LINK blobcli 00:02:57.553 LINK dif 00:02:58.119 LINK cuse 00:02:58.119 CC examples/bdev/hello_world/hello_bdev.o 00:02:58.119 CC examples/bdev/bdevperf/bdevperf.o 00:02:58.119 CC test/bdev/bdevio/bdevio.o 00:02:58.377 LINK hello_bdev 00:02:58.636 LINK bdevio 00:02:58.636 LINK bdevperf 00:02:59.204 CC examples/nvmf/nvmf/nvmf.o 00:02:59.462 LINK nvmf 00:03:00.839 LINK esnap 00:03:00.839 00:03:00.839 real 0m56.761s 00:03:00.839 user 8m25.274s 00:03:00.839 sys 3m50.441s 00:03:00.839 17:13:27 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:00.839 17:13:27 make -- common/autotest_common.sh@10 -- $ set +x 00:03:00.839 ************************************ 00:03:00.839 END TEST make 00:03:00.840 ************************************ 00:03:01.099 17:13:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:01.099 17:13:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:01.099 17:13:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:01.099 17:13:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.099 17:13:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:01.099 17:13:27 -- pm/common@44 -- $ pid=1626533 00:03:01.099 17:13:27 -- pm/common@50 -- $ kill -TERM 1626533 00:03:01.099 17:13:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.099 17:13:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:01.099 17:13:27 -- pm/common@44 -- $ pid=1626535 00:03:01.099 17:13:27 -- pm/common@50 -- $ kill -TERM 1626535 00:03:01.099 17:13:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.099 17:13:27 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:01.099 17:13:27 -- pm/common@44 -- $ pid=1626536 00:03:01.099 17:13:27 -- pm/common@50 -- $ kill -TERM 1626536 00:03:01.099 17:13:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.099 17:13:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:01.099 17:13:27 -- pm/common@44 -- $ pid=1626561 00:03:01.099 17:13:27 -- pm/common@50 -- $ sudo -E kill -TERM 1626561 00:03:01.099 17:13:27 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:01.099 17:13:27 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:01.099 17:13:27 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:01.099 17:13:27 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:01.099 17:13:27 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:01.099 17:13:27 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:01.099 17:13:27 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:01.099 17:13:27 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:01.099 17:13:27 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:01.099 17:13:27 -- scripts/common.sh@336 -- # IFS=.-: 00:03:01.099 17:13:27 -- scripts/common.sh@336 -- # read -ra ver1 00:03:01.099 17:13:27 -- scripts/common.sh@337 -- # IFS=.-: 00:03:01.099 17:13:27 -- scripts/common.sh@337 -- # read -ra ver2 00:03:01.099 17:13:27 -- scripts/common.sh@338 -- # local 'op=<' 00:03:01.099 17:13:27 -- scripts/common.sh@340 -- # ver1_l=2 00:03:01.099 17:13:27 -- scripts/common.sh@341 -- # ver2_l=1 00:03:01.099 17:13:27 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:01.099 17:13:27 -- scripts/common.sh@344 -- # case "$op" in 00:03:01.099 17:13:27 -- scripts/common.sh@345 -- # : 1 
00:03:01.099 17:13:27 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:01.099 17:13:27 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:01.099 17:13:27 -- scripts/common.sh@365 -- # decimal 1 00:03:01.099 17:13:27 -- scripts/common.sh@353 -- # local d=1 00:03:01.099 17:13:27 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:01.099 17:13:27 -- scripts/common.sh@355 -- # echo 1 00:03:01.099 17:13:27 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:01.099 17:13:27 -- scripts/common.sh@366 -- # decimal 2 00:03:01.099 17:13:27 -- scripts/common.sh@353 -- # local d=2 00:03:01.099 17:13:27 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:01.099 17:13:27 -- scripts/common.sh@355 -- # echo 2 00:03:01.099 17:13:27 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:01.099 17:13:27 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:01.099 17:13:27 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:01.099 17:13:27 -- scripts/common.sh@368 -- # return 0 00:03:01.099 17:13:27 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:01.099 17:13:27 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:01.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.099 --rc genhtml_branch_coverage=1 00:03:01.099 --rc genhtml_function_coverage=1 00:03:01.099 --rc genhtml_legend=1 00:03:01.099 --rc geninfo_all_blocks=1 00:03:01.099 --rc geninfo_unexecuted_blocks=1 00:03:01.099 00:03:01.099 ' 00:03:01.099 17:13:27 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:01.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.099 --rc genhtml_branch_coverage=1 00:03:01.099 --rc genhtml_function_coverage=1 00:03:01.099 --rc genhtml_legend=1 00:03:01.099 --rc geninfo_all_blocks=1 00:03:01.099 --rc geninfo_unexecuted_blocks=1 00:03:01.099 00:03:01.099 ' 00:03:01.099 17:13:27 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:01.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.099 --rc genhtml_branch_coverage=1 00:03:01.099 --rc genhtml_function_coverage=1 00:03:01.099 --rc genhtml_legend=1 00:03:01.099 --rc geninfo_all_blocks=1 00:03:01.099 --rc geninfo_unexecuted_blocks=1 00:03:01.099 00:03:01.099 ' 00:03:01.099 17:13:27 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:01.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.099 --rc genhtml_branch_coverage=1 00:03:01.099 --rc genhtml_function_coverage=1 00:03:01.099 --rc genhtml_legend=1 00:03:01.099 --rc geninfo_all_blocks=1 00:03:01.099 --rc geninfo_unexecuted_blocks=1 00:03:01.099 00:03:01.099 ' 00:03:01.099 17:13:27 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:01.099 17:13:27 -- nvmf/common.sh@7 -- # uname -s 00:03:01.099 17:13:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:01.099 17:13:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:01.099 17:13:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:01.099 17:13:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:01.099 17:13:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:01.099 17:13:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:01.099 17:13:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:01.099 17:13:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:01.099 17:13:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:01.099 17:13:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:01.358 17:13:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:01.358 17:13:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:01.358 17:13:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:01.358 17:13:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:03:01.358 17:13:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:01.358 17:13:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:01.358 17:13:27 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:01.358 17:13:27 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:01.358 17:13:27 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:01.359 17:13:27 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:01.359 17:13:27 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:01.359 17:13:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.359 17:13:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.359 17:13:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.359 17:13:27 -- paths/export.sh@5 -- # export PATH 00:03:01.359 17:13:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.359 17:13:27 -- nvmf/common.sh@51 -- # : 0 00:03:01.359 
17:13:27 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:01.359 17:13:27 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:01.359 17:13:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:01.359 17:13:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:01.359 17:13:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:01.359 17:13:27 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:01.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:01.359 17:13:27 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:01.359 17:13:27 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:01.359 17:13:27 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:01.359 17:13:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:01.359 17:13:27 -- spdk/autotest.sh@32 -- # uname -s 00:03:01.359 17:13:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:01.359 17:13:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:01.359 17:13:27 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:01.359 17:13:27 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:01.359 17:13:27 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:01.359 17:13:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:01.359 17:13:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:01.359 17:13:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:01.359 17:13:27 -- spdk/autotest.sh@48 -- # udevadm_pid=1690462 00:03:01.359 17:13:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:01.359 17:13:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:01.359 17:13:27 -- pm/common@17 -- # local monitor 00:03:01.359 17:13:27 -- pm/common@19 -- # for 
monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.359 17:13:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.359 17:13:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.359 17:13:27 -- pm/common@21 -- # date +%s 00:03:01.359 17:13:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.359 17:13:27 -- pm/common@21 -- # date +%s 00:03:01.359 17:13:27 -- pm/common@25 -- # sleep 1 00:03:01.359 17:13:27 -- pm/common@21 -- # date +%s 00:03:01.359 17:13:27 -- pm/common@21 -- # date +%s 00:03:01.359 17:13:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733760807 00:03:01.359 17:13:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733760807 00:03:01.359 17:13:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733760807 00:03:01.359 17:13:27 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733760807 00:03:01.359 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733760807_collect-cpu-load.pm.log 00:03:01.359 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733760807_collect-vmstat.pm.log 00:03:01.359 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733760807_collect-cpu-temp.pm.log 00:03:01.359 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733760807_collect-bmc-pm.bmc.pm.log 00:03:02.295 17:13:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:02.295 17:13:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:02.295 17:13:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:02.295 17:13:28 -- common/autotest_common.sh@10 -- # set +x 00:03:02.295 17:13:28 -- spdk/autotest.sh@59 -- # create_test_list 00:03:02.295 17:13:28 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:02.295 17:13:28 -- common/autotest_common.sh@10 -- # set +x 00:03:02.295 17:13:28 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:02.295 17:13:28 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:02.295 17:13:28 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:02.295 17:13:28 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:02.295 17:13:28 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:02.295 17:13:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:02.295 17:13:28 -- common/autotest_common.sh@1457 -- # uname 00:03:02.295 17:13:28 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:02.295 17:13:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:02.295 17:13:28 -- common/autotest_common.sh@1477 -- # uname 00:03:02.295 17:13:28 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:02.295 17:13:28 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:02.295 17:13:28 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 
00:03:02.295 lcov: LCOV version 1.15 00:03:02.295 17:13:28 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:14.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:14.506 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:29.390 17:13:53 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:29.390 17:13:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:29.390 17:13:53 -- common/autotest_common.sh@10 -- # set +x 00:03:29.390 17:13:53 -- spdk/autotest.sh@78 -- # rm -f 00:03:29.390 17:13:53 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.649 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:29.907 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:29.907 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:29.907 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:29.907 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:29.907 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:29.907 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:29.907 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:29.907 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:29.907 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:29.907 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:29.907 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:29.907 0000:80:04.4 (8086 
2021): Already using the ioatdma driver 00:03:30.178 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:30.178 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:30.178 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:30.178 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:30.178 17:13:56 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:30.178 17:13:56 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:30.178 17:13:56 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:30.178 17:13:56 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:30.178 17:13:56 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:30.178 17:13:56 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:30.178 17:13:56 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:30.178 17:13:56 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:30.178 17:13:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:30.178 17:13:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:30.178 17:13:56 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:30.178 17:13:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:30.178 17:13:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:30.178 17:13:56 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:30.178 17:13:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:30.178 17:13:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:30.178 17:13:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:30.178 17:13:56 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:30.178 17:13:56 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:30.178 No valid GPT data, bailing 00:03:30.178 17:13:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o 
value /dev/nvme0n1 00:03:30.178 17:13:56 -- scripts/common.sh@394 -- # pt= 00:03:30.178 17:13:56 -- scripts/common.sh@395 -- # return 1 00:03:30.178 17:13:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:30.178 1+0 records in 00:03:30.178 1+0 records out 00:03:30.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00175334 s, 598 MB/s 00:03:30.178 17:13:56 -- spdk/autotest.sh@105 -- # sync 00:03:30.178 17:13:56 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:30.178 17:13:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:30.178 17:13:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:36.763 17:14:02 -- spdk/autotest.sh@111 -- # uname -s 00:03:36.763 17:14:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:36.763 17:14:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:36.763 17:14:02 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:38.670 Hugepages 00:03:38.670 node hugesize free / total 00:03:38.670 node0 1048576kB 0 / 0 00:03:38.670 node0 2048kB 0 / 0 00:03:38.670 node1 1048576kB 0 / 0 00:03:38.670 node1 2048kB 0 / 0 00:03:38.670 00:03:38.670 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:38.670 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:38.670 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:38.670 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:38.670 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:38.670 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:38.670 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:38.670 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:38.670 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:38.670 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:38.670 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:38.670 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:38.670 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:38.670 I/OAT 
0000:80:04.3 8086 2021 1 ioatdma - - 00:03:38.670 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:38.670 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:38.670 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:38.671 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:38.671 17:14:04 -- spdk/autotest.sh@117 -- # uname -s 00:03:38.671 17:14:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:38.671 17:14:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:38.671 17:14:04 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.207 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.207 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.207 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.467 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.467 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.467 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.467 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.467 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:41.467 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.467 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.467 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.467 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.467 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.467 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.467 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.467 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:42.406 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.406 17:14:08 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:43.344 17:14:09 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:43.344 17:14:09 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:43.344 17:14:09 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:43.344 17:14:09 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:43.344 17:14:09 -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:03:43.344 17:14:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:43.344 17:14:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:43.344 17:14:09 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:43.344 17:14:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:43.603 17:14:09 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:43.603 17:14:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:43.603 17:14:09 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.140 Waiting for block devices as requested 00:03:46.140 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:46.399 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:46.399 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:46.658 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:46.658 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:46.658 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:46.658 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:46.918 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:46.918 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:46.918 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:47.177 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:47.177 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:47.177 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:47.474 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:47.474 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:47.474 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:47.474 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:47.805 17:14:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:47.805 17:14:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 
0000:5e:00.0 00:03:47.805 17:14:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:47.805 17:14:14 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:47.805 17:14:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:47.805 17:14:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:47.805 17:14:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:47.805 17:14:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:47.805 17:14:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:47.805 17:14:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:47.805 17:14:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:47.805 17:14:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:47.805 17:14:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:47.805 17:14:14 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:47.805 17:14:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:47.805 17:14:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:47.805 17:14:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:47.805 17:14:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:47.805 17:14:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:47.805 17:14:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:47.805 17:14:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:47.805 17:14:14 -- common/autotest_common.sh@1543 -- # continue 00:03:47.805 17:14:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:47.805 17:14:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:47.805 17:14:14 -- common/autotest_common.sh@10 -- # set +x 00:03:47.805 17:14:14 -- 
spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:47.805 17:14:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.805 17:14:14 -- common/autotest_common.sh@10 -- # set +x 00:03:47.805 17:14:14 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:50.370 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:50.370 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:50.370 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:50.630 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:50.630 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:50.630 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:50.630 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:50.630 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:50.630 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:50.630 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:50.630 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:50.630 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:50.630 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:50.630 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:50.630 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:50.630 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:51.567 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:51.567 17:14:18 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:51.567 17:14:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.567 17:14:18 -- common/autotest_common.sh@10 -- # set +x 00:03:51.567 17:14:18 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:51.567 17:14:18 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:51.567 17:14:18 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:51.567 17:14:18 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:51.567 17:14:18 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:51.567 17:14:18 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:51.567 17:14:18 -- 
common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:51.567 17:14:18 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:51.567 17:14:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:51.567 17:14:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:51.567 17:14:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:51.567 17:14:18 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:51.567 17:14:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:51.826 17:14:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:51.826 17:14:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:51.826 17:14:18 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:51.826 17:14:18 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:51.826 17:14:18 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:51.826 17:14:18 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:51.826 17:14:18 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:51.826 17:14:18 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:51.826 17:14:18 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:51.826 17:14:18 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:51.826 17:14:18 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1704587 00:03:51.826 17:14:18 -- common/autotest_common.sh@1585 -- # waitforlisten 1704587 00:03:51.826 17:14:18 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.826 17:14:18 -- common/autotest_common.sh@835 -- # '[' -z 1704587 ']' 00:03:51.826 17:14:18 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.826 17:14:18 -- common/autotest_common.sh@840 -- 
# local max_retries=100 00:03:51.826 17:14:18 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:51.826 17:14:18 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:51.826 17:14:18 -- common/autotest_common.sh@10 -- # set +x 00:03:51.826 [2024-12-09 17:14:18.184439] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:03:51.827 [2024-12-09 17:14:18.184492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1704587 ] 00:03:51.827 [2024-12-09 17:14:18.259431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.827 [2024-12-09 17:14:18.300836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.085 17:14:18 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:52.085 17:14:18 -- common/autotest_common.sh@868 -- # return 0 00:03:52.085 17:14:18 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:52.085 17:14:18 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:52.085 17:14:18 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:55.369 nvme0n1 00:03:55.369 17:14:21 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:55.369 [2024-12-09 17:14:21.705779] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 1 00:03:55.369 [2024-12-09 17:14:21.705811] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 1 00:03:55.369 
request: 00:03:55.369 { 00:03:55.369 "nvme_ctrlr_name": "nvme0", 00:03:55.369 "password": "test", 00:03:55.369 "method": "bdev_nvme_opal_revert", 00:03:55.369 "req_id": 1 00:03:55.369 } 00:03:55.369 Got JSON-RPC error response 00:03:55.369 response: 00:03:55.369 { 00:03:55.369 "code": -32603, 00:03:55.369 "message": "Internal error" 00:03:55.369 } 00:03:55.369 17:14:21 -- common/autotest_common.sh@1591 -- # true 00:03:55.369 17:14:21 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:55.369 17:14:21 -- common/autotest_common.sh@1595 -- # killprocess 1704587 00:03:55.369 17:14:21 -- common/autotest_common.sh@954 -- # '[' -z 1704587 ']' 00:03:55.369 17:14:21 -- common/autotest_common.sh@958 -- # kill -0 1704587 00:03:55.369 17:14:21 -- common/autotest_common.sh@959 -- # uname 00:03:55.369 17:14:21 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:55.369 17:14:21 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1704587 00:03:55.369 17:14:21 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:55.369 17:14:21 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:55.369 17:14:21 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1704587' 00:03:55.369 killing process with pid 1704587 00:03:55.369 17:14:21 -- common/autotest_common.sh@973 -- # kill 1704587 00:03:55.369 17:14:21 -- common/autotest_common.sh@978 -- # wait 1704587 00:03:57.271 17:14:23 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:57.271 17:14:23 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:57.271 17:14:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:57.271 17:14:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:57.271 17:14:23 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:57.271 17:14:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.271 17:14:23 -- common/autotest_common.sh@10 -- # set +x 00:03:57.271 17:14:23 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:57.271 17:14:23 -- 
spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:57.271 17:14:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.271 17:14:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.271 17:14:23 -- common/autotest_common.sh@10 -- # set +x 00:03:57.271 ************************************ 00:03:57.271 START TEST env 00:03:57.271 ************************************ 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:57.271 * Looking for test storage... 00:03:57.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:57.271 17:14:23 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.271 17:14:23 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.271 17:14:23 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.271 17:14:23 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.271 17:14:23 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.271 17:14:23 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.271 17:14:23 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.271 17:14:23 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.271 17:14:23 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.271 17:14:23 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.271 17:14:23 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.271 17:14:23 env -- scripts/common.sh@344 -- # case "$op" in 00:03:57.271 17:14:23 env -- scripts/common.sh@345 -- # : 1 00:03:57.271 17:14:23 env -- scripts/common.sh@364 -- # 
(( v = 0 )) 00:03:57.271 17:14:23 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:57.271 17:14:23 env -- scripts/common.sh@365 -- # decimal 1 00:03:57.271 17:14:23 env -- scripts/common.sh@353 -- # local d=1 00:03:57.271 17:14:23 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.271 17:14:23 env -- scripts/common.sh@355 -- # echo 1 00:03:57.271 17:14:23 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.271 17:14:23 env -- scripts/common.sh@366 -- # decimal 2 00:03:57.271 17:14:23 env -- scripts/common.sh@353 -- # local d=2 00:03:57.271 17:14:23 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.271 17:14:23 env -- scripts/common.sh@355 -- # echo 2 00:03:57.271 17:14:23 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.271 17:14:23 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.271 17:14:23 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.271 17:14:23 env -- scripts/common.sh@368 -- # return 0 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:57.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.271 --rc genhtml_branch_coverage=1 00:03:57.271 --rc genhtml_function_coverage=1 00:03:57.271 --rc genhtml_legend=1 00:03:57.271 --rc geninfo_all_blocks=1 00:03:57.271 --rc geninfo_unexecuted_blocks=1 00:03:57.271 00:03:57.271 ' 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:57.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.271 --rc genhtml_branch_coverage=1 00:03:57.271 --rc genhtml_function_coverage=1 00:03:57.271 --rc genhtml_legend=1 00:03:57.271 --rc geninfo_all_blocks=1 00:03:57.271 --rc geninfo_unexecuted_blocks=1 00:03:57.271 00:03:57.271 ' 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:03:57.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.271 --rc genhtml_branch_coverage=1 00:03:57.271 --rc genhtml_function_coverage=1 00:03:57.271 --rc genhtml_legend=1 00:03:57.271 --rc geninfo_all_blocks=1 00:03:57.271 --rc geninfo_unexecuted_blocks=1 00:03:57.271 00:03:57.271 ' 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:57.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.271 --rc genhtml_branch_coverage=1 00:03:57.271 --rc genhtml_function_coverage=1 00:03:57.271 --rc genhtml_legend=1 00:03:57.271 --rc geninfo_all_blocks=1 00:03:57.271 --rc geninfo_unexecuted_blocks=1 00:03:57.271 00:03:57.271 ' 00:03:57.271 17:14:23 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.271 17:14:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.271 ************************************ 00:03:57.271 START TEST env_memory 00:03:57.271 ************************************ 00:03:57.271 17:14:23 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:57.271 00:03:57.271 00:03:57.271 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.271 http://cunit.sourceforge.net/ 00:03:57.271 00:03:57.271 00:03:57.271 Suite: memory 00:03:57.271 Test: alloc and free memory map ...[2024-12-09 17:14:23.666793] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:57.271 passed 00:03:57.271 Test: mem map translation ...[2024-12-09 17:14:23.684243] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid 
spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:57.271 [2024-12-09 17:14:23.684256] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:57.271 [2024-12-09 17:14:23.684292] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:57.271 [2024-12-09 17:14:23.684298] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:57.271 passed 00:03:57.271 Test: mem map registration ...[2024-12-09 17:14:23.722943] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:57.271 [2024-12-09 17:14:23.722956] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:57.271 passed 00:03:57.271 Test: mem map adjacent registrations ...passed 00:03:57.271 00:03:57.271 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.271 suites 1 1 n/a 0 0 00:03:57.271 tests 4 4 4 0 0 00:03:57.271 asserts 152 152 152 0 n/a 00:03:57.271 00:03:57.271 Elapsed time = 0.138 seconds 00:03:57.271 00:03:57.271 real 0m0.151s 00:03:57.271 user 0m0.142s 00:03:57.271 sys 0m0.009s 00:03:57.271 17:14:23 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.271 17:14:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:57.271 ************************************ 00:03:57.271 END TEST env_memory 00:03:57.271 ************************************ 00:03:57.271 17:14:23 env -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.271 17:14:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.271 17:14:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.531 ************************************ 00:03:57.531 START TEST env_vtophys 00:03:57.531 ************************************ 00:03:57.531 17:14:23 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:57.531 EAL: lib.eal log level changed from notice to debug 00:03:57.531 EAL: Detected lcore 0 as core 0 on socket 0 00:03:57.531 EAL: Detected lcore 1 as core 1 on socket 0 00:03:57.531 EAL: Detected lcore 2 as core 2 on socket 0 00:03:57.531 EAL: Detected lcore 3 as core 3 on socket 0 00:03:57.531 EAL: Detected lcore 4 as core 4 on socket 0 00:03:57.531 EAL: Detected lcore 5 as core 5 on socket 0 00:03:57.531 EAL: Detected lcore 6 as core 6 on socket 0 00:03:57.531 EAL: Detected lcore 7 as core 8 on socket 0 00:03:57.531 EAL: Detected lcore 8 as core 9 on socket 0 00:03:57.531 EAL: Detected lcore 9 as core 10 on socket 0 00:03:57.531 EAL: Detected lcore 10 as core 11 on socket 0 00:03:57.531 EAL: Detected lcore 11 as core 12 on socket 0 00:03:57.531 EAL: Detected lcore 12 as core 13 on socket 0 00:03:57.531 EAL: Detected lcore 13 as core 16 on socket 0 00:03:57.531 EAL: Detected lcore 14 as core 17 on socket 0 00:03:57.531 EAL: Detected lcore 15 as core 18 on socket 0 00:03:57.531 EAL: Detected lcore 16 as core 19 on socket 0 00:03:57.531 EAL: Detected lcore 17 as core 20 on socket 0 00:03:57.531 EAL: Detected lcore 18 as core 21 on socket 0 00:03:57.531 EAL: Detected lcore 19 as core 25 on socket 0 00:03:57.531 EAL: Detected lcore 20 as core 26 on socket 0 00:03:57.531 EAL: Detected lcore 21 as core 27 on socket 0 00:03:57.531 EAL: Detected lcore 22 as core 28 on socket 0 
00:03:57.531 EAL: Detected lcore 23 as core 29 on socket 0 00:03:57.531 EAL: Detected lcore 24 as core 0 on socket 1 00:03:57.531 EAL: Detected lcore 25 as core 1 on socket 1 00:03:57.531 EAL: Detected lcore 26 as core 2 on socket 1 00:03:57.531 EAL: Detected lcore 27 as core 3 on socket 1 00:03:57.531 EAL: Detected lcore 28 as core 4 on socket 1 00:03:57.531 EAL: Detected lcore 29 as core 5 on socket 1 00:03:57.531 EAL: Detected lcore 30 as core 6 on socket 1 00:03:57.531 EAL: Detected lcore 31 as core 8 on socket 1 00:03:57.531 EAL: Detected lcore 32 as core 9 on socket 1 00:03:57.531 EAL: Detected lcore 33 as core 10 on socket 1 00:03:57.531 EAL: Detected lcore 34 as core 11 on socket 1 00:03:57.531 EAL: Detected lcore 35 as core 12 on socket 1 00:03:57.531 EAL: Detected lcore 36 as core 13 on socket 1 00:03:57.531 EAL: Detected lcore 37 as core 16 on socket 1 00:03:57.531 EAL: Detected lcore 38 as core 17 on socket 1 00:03:57.531 EAL: Detected lcore 39 as core 18 on socket 1 00:03:57.531 EAL: Detected lcore 40 as core 19 on socket 1 00:03:57.531 EAL: Detected lcore 41 as core 20 on socket 1 00:03:57.531 EAL: Detected lcore 42 as core 21 on socket 1 00:03:57.531 EAL: Detected lcore 43 as core 25 on socket 1 00:03:57.531 EAL: Detected lcore 44 as core 26 on socket 1 00:03:57.531 EAL: Detected lcore 45 as core 27 on socket 1 00:03:57.531 EAL: Detected lcore 46 as core 28 on socket 1 00:03:57.531 EAL: Detected lcore 47 as core 29 on socket 1 00:03:57.531 EAL: Detected lcore 48 as core 0 on socket 0 00:03:57.531 EAL: Detected lcore 49 as core 1 on socket 0 00:03:57.531 EAL: Detected lcore 50 as core 2 on socket 0 00:03:57.531 EAL: Detected lcore 51 as core 3 on socket 0 00:03:57.531 EAL: Detected lcore 52 as core 4 on socket 0 00:03:57.531 EAL: Detected lcore 53 as core 5 on socket 0 00:03:57.531 EAL: Detected lcore 54 as core 6 on socket 0 00:03:57.531 EAL: Detected lcore 55 as core 8 on socket 0 00:03:57.531 EAL: Detected lcore 56 as core 9 on socket 0 
00:03:57.531 EAL: Detected lcore 57 as core 10 on socket 0 00:03:57.531 EAL: Detected lcore 58 as core 11 on socket 0 00:03:57.531 EAL: Detected lcore 59 as core 12 on socket 0 00:03:57.532 EAL: Detected lcore 60 as core 13 on socket 0 00:03:57.532 EAL: Detected lcore 61 as core 16 on socket 0 00:03:57.532 EAL: Detected lcore 62 as core 17 on socket 0 00:03:57.532 EAL: Detected lcore 63 as core 18 on socket 0 00:03:57.532 EAL: Detected lcore 64 as core 19 on socket 0 00:03:57.532 EAL: Detected lcore 65 as core 20 on socket 0 00:03:57.532 EAL: Detected lcore 66 as core 21 on socket 0 00:03:57.532 EAL: Detected lcore 67 as core 25 on socket 0 00:03:57.532 EAL: Detected lcore 68 as core 26 on socket 0 00:03:57.532 EAL: Detected lcore 69 as core 27 on socket 0 00:03:57.532 EAL: Detected lcore 70 as core 28 on socket 0 00:03:57.532 EAL: Detected lcore 71 as core 29 on socket 0 00:03:57.532 EAL: Detected lcore 72 as core 0 on socket 1 00:03:57.532 EAL: Detected lcore 73 as core 1 on socket 1 00:03:57.532 EAL: Detected lcore 74 as core 2 on socket 1 00:03:57.532 EAL: Detected lcore 75 as core 3 on socket 1 00:03:57.532 EAL: Detected lcore 76 as core 4 on socket 1 00:03:57.532 EAL: Detected lcore 77 as core 5 on socket 1 00:03:57.532 EAL: Detected lcore 78 as core 6 on socket 1 00:03:57.532 EAL: Detected lcore 79 as core 8 on socket 1 00:03:57.532 EAL: Detected lcore 80 as core 9 on socket 1 00:03:57.532 EAL: Detected lcore 81 as core 10 on socket 1 00:03:57.532 EAL: Detected lcore 82 as core 11 on socket 1 00:03:57.532 EAL: Detected lcore 83 as core 12 on socket 1 00:03:57.532 EAL: Detected lcore 84 as core 13 on socket 1 00:03:57.532 EAL: Detected lcore 85 as core 16 on socket 1 00:03:57.532 EAL: Detected lcore 86 as core 17 on socket 1 00:03:57.532 EAL: Detected lcore 87 as core 18 on socket 1 00:03:57.532 EAL: Detected lcore 88 as core 19 on socket 1 00:03:57.532 EAL: Detected lcore 89 as core 20 on socket 1 00:03:57.532 EAL: Detected lcore 90 as core 21 on socket 1 
00:03:57.532 EAL: Detected lcore 91 as core 25 on socket 1
00:03:57.532 EAL: Detected lcore 92 as core 26 on socket 1
00:03:57.532 EAL: Detected lcore 93 as core 27 on socket 1
00:03:57.532 EAL: Detected lcore 94 as core 28 on socket 1
00:03:57.532 EAL: Detected lcore 95 as core 29 on socket 1
00:03:57.532 EAL: Maximum logical cores by configuration: 128
00:03:57.532 EAL: Detected CPU lcores: 96
00:03:57.532 EAL: Detected NUMA nodes: 2
00:03:57.532 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:57.532 EAL: Detected shared linkage of DPDK
00:03:57.532 EAL: No shared files mode enabled, IPC will be disabled
00:03:57.532 EAL: Bus pci wants IOVA as 'DC'
00:03:57.532 EAL: Buses did not request a specific IOVA mode.
00:03:57.532 EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:57.532 EAL: Selected IOVA mode 'VA'
00:03:57.532 EAL: Probing VFIO support...
00:03:57.532 EAL: IOMMU type 1 (Type 1) is supported
00:03:57.532 EAL: IOMMU type 7 (sPAPR) is not supported
00:03:57.532 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:57.532 EAL: VFIO support initialized
00:03:57.532 EAL: Ask a virtual area of 0x2e000 bytes
00:03:57.532 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:57.532 EAL: Setting up physically contiguous memory...
00:03:57.532 EAL: Setting maximum number of open files to 524288
00:03:57.532 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:57.532 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:57.532 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:57.532 EAL: Ask a virtual area of 0x61000 bytes
00:03:57.532 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:57.532 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:57.532 EAL: Ask a virtual area of 0x400000000 bytes
00:03:57.532 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:57.532 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:57.532 EAL: Ask a virtual area of 0x61000 bytes
00:03:57.532 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:57.532 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:57.532 EAL: Ask a virtual area of 0x400000000 bytes
00:03:57.532 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:57.532 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:57.532 EAL: Ask a virtual area of 0x61000 bytes
00:03:57.532 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:57.532 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:57.532 EAL: Ask a virtual area of 0x400000000 bytes
00:03:57.532 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:57.532 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:57.532 EAL: Ask a virtual area of 0x61000 bytes
00:03:57.532 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:57.532 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:57.532 EAL: Ask a virtual area of 0x400000000 bytes
00:03:57.532 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:57.532 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:57.532 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:57.532 EAL: Ask a virtual area of 0x61000 bytes
00:03:57.532 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:57.532 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:57.532 EAL: Ask a virtual area of 0x400000000 bytes
00:03:57.532 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:57.532 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:57.532 EAL: Ask a virtual area of 0x61000 bytes
00:03:57.532 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:57.532 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:57.532 EAL: Ask a virtual area of 0x400000000 bytes
00:03:57.532 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:57.532 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:57.532 EAL: Ask a virtual area of 0x61000 bytes
00:03:57.532 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:57.532 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:57.532 EAL: Ask a virtual area of 0x400000000 bytes
00:03:57.532 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:57.532 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:57.532 EAL: Ask a virtual area of 0x61000 bytes
00:03:57.532 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:57.532 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:57.532 EAL: Ask a virtual area of 0x400000000 bytes
00:03:57.532 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:57.532 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:57.532 EAL: Hugepages will be freed exactly as allocated.
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: TSC frequency is ~2100000 KHz
00:03:57.532 EAL: Main lcore 0 is ready (tid=7ff7d3cd3a00;cpuset=[0])
00:03:57.532 EAL: Trying to obtain current memory policy.
00:03:57.532 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:57.532 EAL: Restoring previous memory policy: 0
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was expanded by 2MB
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:57.532 EAL: Mem event callback 'spdk:(nil)' registered
00:03:57.532
00:03:57.532
00:03:57.532 CUnit - A unit testing framework for C - Version 2.1-3
00:03:57.532 http://cunit.sourceforge.net/
00:03:57.532
00:03:57.532
00:03:57.532 Suite: components_suite
00:03:57.532 Test: vtophys_malloc_test ...passed
00:03:57.532 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:57.532 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:57.532 EAL: Restoring previous memory policy: 4
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was expanded by 4MB
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was shrunk by 4MB
00:03:57.532 EAL: Trying to obtain current memory policy.
00:03:57.532 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:57.532 EAL: Restoring previous memory policy: 4
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was expanded by 6MB
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was shrunk by 6MB
00:03:57.532 EAL: Trying to obtain current memory policy.
00:03:57.532 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:57.532 EAL: Restoring previous memory policy: 4
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was expanded by 10MB
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was shrunk by 10MB
00:03:57.532 EAL: Trying to obtain current memory policy.
00:03:57.532 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:57.532 EAL: Restoring previous memory policy: 4
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was expanded by 18MB
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was shrunk by 18MB
00:03:57.532 EAL: Trying to obtain current memory policy.
00:03:57.532 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:57.532 EAL: Restoring previous memory policy: 4
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was expanded by 34MB
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was shrunk by 34MB
00:03:57.532 EAL: Trying to obtain current memory policy.
00:03:57.532 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:57.532 EAL: Restoring previous memory policy: 4
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was expanded by 66MB
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was shrunk by 66MB
00:03:57.532 EAL: Trying to obtain current memory policy.
00:03:57.532 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:57.532 EAL: Restoring previous memory policy: 4
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was expanded by 130MB
00:03:57.532 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.532 EAL: request: mp_malloc_sync
00:03:57.532 EAL: No shared files mode enabled, IPC is disabled
00:03:57.532 EAL: Heap on socket 0 was shrunk by 130MB
00:03:57.532 EAL: Trying to obtain current memory policy.
00:03:57.532 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:57.791 EAL: Restoring previous memory policy: 4
00:03:57.792 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.792 EAL: request: mp_malloc_sync
00:03:57.792 EAL: No shared files mode enabled, IPC is disabled
00:03:57.792 EAL: Heap on socket 0 was expanded by 258MB
00:03:57.792 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.792 EAL: request: mp_malloc_sync
00:03:57.792 EAL: No shared files mode enabled, IPC is disabled
00:03:57.792 EAL: Heap on socket 0 was shrunk by 258MB
00:03:57.792 EAL: Trying to obtain current memory policy.
00:03:57.792 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:57.792 EAL: Restoring previous memory policy: 4
00:03:57.792 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.792 EAL: request: mp_malloc_sync
00:03:57.792 EAL: No shared files mode enabled, IPC is disabled
00:03:57.792 EAL: Heap on socket 0 was expanded by 514MB
00:03:58.050 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.050 EAL: request: mp_malloc_sync
00:03:58.050 EAL: No shared files mode enabled, IPC is disabled
00:03:58.050 EAL: Heap on socket 0 was shrunk by 514MB
00:03:58.050 EAL: Trying to obtain current memory policy.
00:03:58.050 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.309 EAL: Restoring previous memory policy: 4
00:03:58.309 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.309 EAL: request: mp_malloc_sync
00:03:58.309 EAL: No shared files mode enabled, IPC is disabled
00:03:58.309 EAL: Heap on socket 0 was expanded by 1026MB
00:03:58.309 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.568 EAL: request: mp_malloc_sync
00:03:58.568 EAL: No shared files mode enabled, IPC is disabled
00:03:58.568 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:58.568 passed
00:03:58.568
00:03:58.568 Run Summary: Type Total Ran Passed Failed Inactive
00:03:58.568 suites 1 1 n/a 0 0
00:03:58.568 tests 2 2 2 0 0
00:03:58.568 asserts 497 497 497 0 n/a
00:03:58.568
00:03:58.568 Elapsed time = 0.974 seconds
00:03:58.568 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.568 EAL: request: mp_malloc_sync
00:03:58.568 EAL: No shared files mode enabled, IPC is disabled
00:03:58.568 EAL: Heap on socket 0 was shrunk by 2MB
00:03:58.568 EAL: No shared files mode enabled, IPC is disabled
00:03:58.568 EAL: No shared files mode enabled, IPC is disabled
00:03:58.568 EAL: No shared files mode enabled, IPC is disabled
00:03:58.568
00:03:58.568 real 0m1.107s
00:03:58.568 user 0m0.644s
00:03:58.568 sys 0m0.434s
00:03:58.568 17:14:24 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:58.568 17:14:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:58.568 ************************************
00:03:58.568 END TEST env_vtophys
00:03:58.568 ************************************
00:03:58.568 17:14:24 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:58.568 17:14:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:58.568 17:14:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:58.568 17:14:24 env -- common/autotest_common.sh@10 -- # set +x
00:03:58.568 ************************************
00:03:58.568 START TEST env_pci
************************************
00:03:58.568 17:14:25 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:58.568
00:03:58.568
00:03:58.568 CUnit - A unit testing framework for C - Version 2.1-3
00:03:58.568 http://cunit.sourceforge.net/
00:03:58.568
00:03:58.568
00:03:58.568 Suite: pci
00:03:58.568 Test: pci_hook ...[2024-12-09 17:14:25.037271] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1705866 has claimed it
00:03:58.568 EAL: Cannot find device (10000:00:01.0)
00:03:58.568 EAL: Failed to attach device on primary process
00:03:58.568 passed
00:03:58.568
00:03:58.568 Run Summary: Type Total Ran Passed Failed Inactive
00:03:58.568 suites 1 1 n/a 0 0
00:03:58.568 tests 1 1 1 0 0
00:03:58.568 asserts 25 25 25 0 n/a
00:03:58.568
00:03:58.568 Elapsed time = 0.028 seconds
00:03:58.568
00:03:58.568 real 0m0.049s
00:03:58.568 user 0m0.017s
00:03:58.568 sys 0m0.032s
00:03:58.568 17:14:25 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:58.568 17:14:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:58.568 ************************************
00:03:58.568 END TEST env_pci
00:03:58.568 ************************************
00:03:58.568 17:14:25 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:58.568 17:14:25 env -- env/env.sh@15 -- # uname
00:03:58.568 17:14:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:58.827 17:14:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:58.827 17:14:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:58.827 17:14:25 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:03:58.827 17:14:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:58.827 17:14:25 env -- common/autotest_common.sh@10 -- # set +x
00:03:58.827 ************************************
00:03:58.827 START TEST env_dpdk_post_init
************************************
00:03:58.827 17:14:25 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:58.827 EAL: Detected CPU lcores: 96
00:03:58.827 EAL: Detected NUMA nodes: 2
00:03:58.827 EAL: Detected shared linkage of DPDK
00:03:58.827 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:58.827 EAL: Selected IOVA mode 'VA'
00:03:58.827 EAL: VFIO support initialized
00:03:58.827 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:58.827 EAL: Using IOMMU type 1 (Type 1)
00:03:58.827 EAL: Ignore mapping IO port bar(1)
00:03:58.827 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:03:58.827 EAL: Ignore mapping IO port bar(1)
00:03:58.827 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:03:58.827 EAL: Ignore mapping IO port bar(1)
00:03:58.827 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:03:58.827 EAL: Ignore mapping IO port bar(1)
00:03:58.827 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:03:58.827 EAL: Ignore mapping IO port bar(1)
00:03:58.827 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:03:58.827 EAL: Ignore mapping IO port bar(1)
00:03:58.827 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:03:58.827 EAL: Ignore mapping IO port bar(1)
00:03:58.827 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:03:59.085 EAL: Ignore mapping IO port bar(1)
00:03:59.085 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:03:59.654 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:03:59.654 EAL: Ignore mapping IO port bar(1)
00:03:59.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:03:59.654 EAL: Ignore mapping IO port bar(1)
00:03:59.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:03:59.654 EAL: Ignore mapping IO port bar(1)
00:03:59.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:03:59.654 EAL: Ignore mapping IO port bar(1)
00:03:59.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:03:59.654 EAL: Ignore mapping IO port bar(1)
00:03:59.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:03:59.654 EAL: Ignore mapping IO port bar(1)
00:03:59.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:03:59.654 EAL: Ignore mapping IO port bar(1)
00:03:59.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:03:59.912 EAL: Ignore mapping IO port bar(1)
00:03:59.912 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:03.197 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:03.197 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:03.197 Starting DPDK initialization...
00:04:03.197 Starting SPDK post initialization...
00:04:03.197 SPDK NVMe probe
00:04:03.197 Attaching to 0000:5e:00.0
00:04:03.197 Attached to 0000:5e:00.0
00:04:03.197 Cleaning up...
00:04:03.197
00:04:03.197 real 0m4.361s
00:04:03.197 user 0m2.988s
00:04:03.197 sys 0m0.448s
00:04:03.197 17:14:29 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:03.197 17:14:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:03.197 ************************************
00:04:03.197 END TEST env_dpdk_post_init
00:04:03.197 ************************************
00:04:03.197 17:14:29 env -- env/env.sh@26 -- # uname
00:04:03.197 17:14:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:03.197 17:14:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:03.197 17:14:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:03.197 17:14:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:03.197 17:14:29 env -- common/autotest_common.sh@10 -- # set +x
00:04:03.197 ************************************
00:04:03.197 START TEST env_mem_callbacks
************************************
00:04:03.197 17:14:29 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:03.197 EAL: Detected CPU lcores: 96
00:04:03.197 EAL: Detected NUMA nodes: 2
00:04:03.197 EAL: Detected shared linkage of DPDK
00:04:03.197 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:03.197 EAL: Selected IOVA mode 'VA'
00:04:03.197 EAL: VFIO support initialized
00:04:03.198 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:03.198
00:04:03.198
00:04:03.198 CUnit - A unit testing framework for C - Version 2.1-3
00:04:03.198 http://cunit.sourceforge.net/
00:04:03.198
00:04:03.198
00:04:03.198 Suite: memory
00:04:03.198 Test: test ...
00:04:03.198 register 0x200000200000 2097152
00:04:03.198 malloc 3145728
00:04:03.198 register 0x200000400000 4194304
00:04:03.198 buf 0x200000500000 len 3145728 PASSED
00:04:03.198 malloc 64
00:04:03.198 buf 0x2000004fff40 len 64 PASSED
00:04:03.198 malloc 4194304
00:04:03.198 register 0x200000800000 6291456
00:04:03.198 buf 0x200000a00000 len 4194304 PASSED
00:04:03.198 free 0x200000500000 3145728
00:04:03.198 free 0x2000004fff40 64
00:04:03.198 unregister 0x200000400000 4194304 PASSED
00:04:03.198 free 0x200000a00000 4194304
00:04:03.198 unregister 0x200000800000 6291456 PASSED
00:04:03.198 malloc 8388608
00:04:03.198 register 0x200000400000 10485760
00:04:03.198 buf 0x200000600000 len 8388608 PASSED
00:04:03.198 free 0x200000600000 8388608
00:04:03.198 unregister 0x200000400000 10485760 PASSED
00:04:03.198 passed
00:04:03.198
00:04:03.198 Run Summary: Type Total Ran Passed Failed Inactive
00:04:03.198 suites 1 1 n/a 0 0
00:04:03.198 tests 1 1 1 0 0
00:04:03.198 asserts 15 15 15 0 n/a
00:04:03.198
00:04:03.198 Elapsed time = 0.008 seconds
00:04:03.198
00:04:03.198 real 0m0.059s
00:04:03.198 user 0m0.017s
00:04:03.198 sys 0m0.042s
00:04:03.198 17:14:29 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:03.198 17:14:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:03.198 ************************************
00:04:03.198 END TEST env_mem_callbacks
00:04:03.198 ************************************
00:04:03.198
00:04:03.198 real 0m6.266s
00:04:03.198 user 0m4.046s
00:04:03.198 sys 0m1.301s
00:04:03.198 17:14:29 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:03.198 17:14:29 env -- common/autotest_common.sh@10 -- # set +x
00:04:03.198 ************************************
00:04:03.198 END TEST env
00:04:03.198 ************************************
00:04:03.198 17:14:29 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:03.198 17:14:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:03.198 17:14:29 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:03.198 17:14:29 -- common/autotest_common.sh@10 -- # set +x
00:04:03.457 ************************************
00:04:03.457 START TEST rpc
************************************
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:03.457 * Looking for test storage...
00:04:03.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:03.457 17:14:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:03.457 17:14:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:03.457 17:14:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:03.457 17:14:29 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:03.457 17:14:29 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:03.457 17:14:29 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:03.457 17:14:29 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:03.457 17:14:29 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:03.457 17:14:29 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:03.457 17:14:29 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:03.457 17:14:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:03.457 17:14:29 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:03.457 17:14:29 rpc -- scripts/common.sh@345 -- # : 1
00:04:03.457 17:14:29 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:03.457 17:14:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:03.457 17:14:29 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:03.457 17:14:29 rpc -- scripts/common.sh@353 -- # local d=1
00:04:03.457 17:14:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:03.457 17:14:29 rpc -- scripts/common.sh@355 -- # echo 1
00:04:03.457 17:14:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:03.457 17:14:29 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:03.457 17:14:29 rpc -- scripts/common.sh@353 -- # local d=2
00:04:03.457 17:14:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:03.457 17:14:29 rpc -- scripts/common.sh@355 -- # echo 2
00:04:03.457 17:14:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:03.457 17:14:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:03.457 17:14:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:03.457 17:14:29 rpc -- scripts/common.sh@368 -- # return 0
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:03.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:03.457 --rc genhtml_branch_coverage=1
00:04:03.457 --rc genhtml_function_coverage=1
00:04:03.457 --rc genhtml_legend=1
00:04:03.457 --rc geninfo_all_blocks=1
00:04:03.457 --rc geninfo_unexecuted_blocks=1
00:04:03.457
00:04:03.457 '
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:03.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:03.457 --rc genhtml_branch_coverage=1
00:04:03.457 --rc genhtml_function_coverage=1
00:04:03.457 --rc genhtml_legend=1
00:04:03.457 --rc geninfo_all_blocks=1
00:04:03.457 --rc geninfo_unexecuted_blocks=1
00:04:03.457
00:04:03.457 '
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:03.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:03.457 --rc genhtml_branch_coverage=1
00:04:03.457 --rc genhtml_function_coverage=1
00:04:03.457 --rc genhtml_legend=1
00:04:03.457 --rc geninfo_all_blocks=1
00:04:03.457 --rc geninfo_unexecuted_blocks=1
00:04:03.457
00:04:03.457 '
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:03.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:03.457 --rc genhtml_branch_coverage=1
00:04:03.457 --rc genhtml_function_coverage=1
00:04:03.457 --rc genhtml_legend=1
00:04:03.457 --rc geninfo_all_blocks=1
00:04:03.457 --rc geninfo_unexecuted_blocks=1
00:04:03.457
00:04:03.457 '
00:04:03.457 17:14:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1706679
00:04:03.457 17:14:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:03.457 17:14:29 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:03.457 17:14:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1706679
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@835 -- # '[' -z 1706679 ']'
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:03.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:14:29 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:03.457 17:14:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:03.457 [2024-12-09 17:14:29.976892] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
00:04:03.458 [2024-12-09 17:14:29.976936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706679 ]
00:04:03.716 [2024-12-09 17:14:30.051264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:03.716 [2024-12-09 17:14:30.093513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:03.716 [2024-12-09 17:14:30.093554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1706679' to capture a snapshot of events at runtime.
00:04:03.716 [2024-12-09 17:14:30.093562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:03.716 [2024-12-09 17:14:30.093568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:03.716 [2024-12-09 17:14:30.093573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1706679 for offline analysis/debug.
00:04:03.716 [2024-12-09 17:14:30.094055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:04.283 17:14:30 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:04.283 17:14:30 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:04.283 17:14:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:04.283 17:14:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:04.283 17:14:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:04.283 17:14:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:04.283 17:14:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:04.283 17:14:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:04.283 17:14:30 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:04.541 ************************************
00:04:04.541 START TEST rpc_integrity
************************************
00:04:04.541 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:04.541 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:04.541 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:04.541 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:04.541 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:04.541 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:04.541 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:04.541 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:04.541 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:04.541 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:04.541 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:04.541 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:04.541 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:04.541 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:04.541 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:04.541 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:04.541 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:04.541 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:04.541 {
00:04:04.541 "name": "Malloc0",
00:04:04.541 "aliases": [
00:04:04.541 "1c849f44-3cf0-400e-97d2-84d79768c38e"
00:04:04.541 ],
00:04:04.542 "product_name": "Malloc disk",
00:04:04.542 "block_size": 512,
00:04:04.542 "num_blocks": 16384,
00:04:04.542 "uuid": "1c849f44-3cf0-400e-97d2-84d79768c38e",
00:04:04.542 "assigned_rate_limits": {
00:04:04.542 "rw_ios_per_sec": 0,
00:04:04.542 "rw_mbytes_per_sec": 0,
00:04:04.542 "r_mbytes_per_sec": 0,
00:04:04.542 "w_mbytes_per_sec": 0
00:04:04.542 },
00:04:04.542 "claimed": false,
00:04:04.542 "zoned": false,
00:04:04.542 "supported_io_types": {
00:04:04.542 "read": true,
00:04:04.542 "write": true,
00:04:04.542 "unmap": true,
00:04:04.542 "flush": true,
00:04:04.542 "reset": true,
00:04:04.542 "nvme_admin": false,
00:04:04.542 "nvme_io": false,
00:04:04.542 "nvme_io_md": false,
00:04:04.542 "write_zeroes": true,
00:04:04.542 "zcopy": true,
00:04:04.542 "get_zone_info": false,
00:04:04.542 "zone_management": false,
00:04:04.542 "zone_append": false,
00:04:04.542 "compare": false,
00:04:04.542 "compare_and_write": false,
00:04:04.542 "abort": true,
00:04:04.542 "seek_hole": false,
00:04:04.542 "seek_data": false,
00:04:04.542 "copy": true,
00:04:04.542 "nvme_iov_md": false
00:04:04.542 },
00:04:04.542 "memory_domains": [
00:04:04.542 {
00:04:04.542 "dma_device_id": "system",
00:04:04.542 "dma_device_type": 1
00:04:04.542 },
00:04:04.542 {
00:04:04.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:04.542 "dma_device_type": 2
00:04:04.542 }
00:04:04.542 ],
00:04:04.542 "driver_specific": {}
00:04:04.542 }
00:04:04.542 ]'
00:04:04.542 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:04.542 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:04.542 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:04.542 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:04.542 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:04.542 [2024-12-09 17:14:30.960642] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:04.542 [2024-12-09 17:14:30.960674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:04.542 [2024-12-09 17:14:30.960685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2579700
00:04:04.542 [2024-12-09 17:14:30.960691] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:04.542 [2024-12-09 17:14:30.961763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:04.542 [2024-12-09 17:14:30.961784] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:04.542 Passthru0
00:04:04.542 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:04.542 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:04.542 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:04.542 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:04.542 17:14:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:04.542 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:04.542 {
00:04:04.542 "name": "Malloc0",
00:04:04.542 "aliases": [
00:04:04.542 "1c849f44-3cf0-400e-97d2-84d79768c38e"
00:04:04.542 ],
00:04:04.542 "product_name": "Malloc disk",
00:04:04.542 "block_size": 512,
00:04:04.542 "num_blocks": 16384,
00:04:04.542 "uuid": "1c849f44-3cf0-400e-97d2-84d79768c38e",
00:04:04.542 "assigned_rate_limits": {
00:04:04.542 "rw_ios_per_sec": 0,
00:04:04.542 "rw_mbytes_per_sec": 0,
00:04:04.542 "r_mbytes_per_sec": 0,
00:04:04.542 "w_mbytes_per_sec": 0
00:04:04.542 },
00:04:04.542 "claimed": true,
00:04:04.542 "claim_type": "exclusive_write",
00:04:04.542 "zoned": false,
00:04:04.542 "supported_io_types": {
00:04:04.542 "read": true,
00:04:04.542 "write": true,
00:04:04.542 "unmap": true,
00:04:04.542 "flush": true,
00:04:04.542 "reset": true,
00:04:04.542 "nvme_admin": false,
00:04:04.542 "nvme_io": false,
00:04:04.542 "nvme_io_md": false,
00:04:04.542 "write_zeroes": true,
00:04:04.542 "zcopy": true,
00:04:04.542 "get_zone_info": false,
00:04:04.542 "zone_management": false,
00:04:04.542 "zone_append": false,
00:04:04.542 "compare": false,
00:04:04.542 "compare_and_write": false,
00:04:04.542 "abort": true,
00:04:04.542 "seek_hole": false,
00:04:04.542 "seek_data": false,
00:04:04.542 "copy": true,
00:04:04.542 "nvme_iov_md": false
00:04:04.542 },
00:04:04.542 "memory_domains": [
00:04:04.542 {
00:04:04.542 "dma_device_id": "system",
00:04:04.542 "dma_device_type": 1
00:04:04.542 },
00:04:04.542 {
00:04:04.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:04.542 "dma_device_type": 2
00:04:04.542 }
00:04:04.542 ],
00:04:04.542 "driver_specific": {}
00:04:04.542 },
00:04:04.542 {
00:04:04.542 "name": "Passthru0", 00:04:04.542 "aliases": [ 00:04:04.542 "fc0f7691-4c46-557b-80b4-0abc674d113a" 00:04:04.542 ], 00:04:04.542 "product_name": "passthru", 00:04:04.542 "block_size": 512, 00:04:04.542 "num_blocks": 16384, 00:04:04.542 "uuid": "fc0f7691-4c46-557b-80b4-0abc674d113a", 00:04:04.542 "assigned_rate_limits": { 00:04:04.542 "rw_ios_per_sec": 0, 00:04:04.542 "rw_mbytes_per_sec": 0, 00:04:04.542 "r_mbytes_per_sec": 0, 00:04:04.542 "w_mbytes_per_sec": 0 00:04:04.542 }, 00:04:04.542 "claimed": false, 00:04:04.542 "zoned": false, 00:04:04.542 "supported_io_types": { 00:04:04.542 "read": true, 00:04:04.542 "write": true, 00:04:04.542 "unmap": true, 00:04:04.542 "flush": true, 00:04:04.542 "reset": true, 00:04:04.542 "nvme_admin": false, 00:04:04.542 "nvme_io": false, 00:04:04.542 "nvme_io_md": false, 00:04:04.542 "write_zeroes": true, 00:04:04.542 "zcopy": true, 00:04:04.542 "get_zone_info": false, 00:04:04.542 "zone_management": false, 00:04:04.542 "zone_append": false, 00:04:04.542 "compare": false, 00:04:04.542 "compare_and_write": false, 00:04:04.542 "abort": true, 00:04:04.542 "seek_hole": false, 00:04:04.542 "seek_data": false, 00:04:04.542 "copy": true, 00:04:04.542 "nvme_iov_md": false 00:04:04.542 }, 00:04:04.542 "memory_domains": [ 00:04:04.542 { 00:04:04.542 "dma_device_id": "system", 00:04:04.542 "dma_device_type": 1 00:04:04.542 }, 00:04:04.542 { 00:04:04.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.542 "dma_device_type": 2 00:04:04.542 } 00:04:04.542 ], 00:04:04.542 "driver_specific": { 00:04:04.542 "passthru": { 00:04:04.542 "name": "Passthru0", 00:04:04.542 "base_bdev_name": "Malloc0" 00:04:04.542 } 00:04:04.542 } 00:04:04.542 } 00:04:04.542 ]' 00:04:04.542 17:14:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:04.542 17:14:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:04.542 17:14:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:04.542 17:14:31 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.542 17:14:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.542 17:14:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.542 17:14:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:04.542 17:14:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.542 17:14:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.542 17:14:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.542 17:14:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:04.542 17:14:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.542 17:14:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.542 17:14:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.542 17:14:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:04.542 17:14:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:04.801 17:14:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:04.801 00:04:04.801 real 0m0.265s 00:04:04.801 user 0m0.174s 00:04:04.801 sys 0m0.030s 00:04:04.801 17:14:31 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.801 17:14:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.801 ************************************ 00:04:04.801 END TEST rpc_integrity 00:04:04.801 ************************************ 00:04:04.801 17:14:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:04.801 17:14:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.801 17:14:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.801 17:14:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.801 ************************************ 00:04:04.801 START TEST rpc_plugins 
00:04:04.801 ************************************ 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:04.801 17:14:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.801 17:14:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:04.801 17:14:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.801 17:14:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:04.801 { 00:04:04.801 "name": "Malloc1", 00:04:04.801 "aliases": [ 00:04:04.801 "dfa788a6-adfe-45db-959f-25be2da9e4db" 00:04:04.801 ], 00:04:04.801 "product_name": "Malloc disk", 00:04:04.801 "block_size": 4096, 00:04:04.801 "num_blocks": 256, 00:04:04.801 "uuid": "dfa788a6-adfe-45db-959f-25be2da9e4db", 00:04:04.801 "assigned_rate_limits": { 00:04:04.801 "rw_ios_per_sec": 0, 00:04:04.801 "rw_mbytes_per_sec": 0, 00:04:04.801 "r_mbytes_per_sec": 0, 00:04:04.801 "w_mbytes_per_sec": 0 00:04:04.801 }, 00:04:04.801 "claimed": false, 00:04:04.801 "zoned": false, 00:04:04.801 "supported_io_types": { 00:04:04.801 "read": true, 00:04:04.801 "write": true, 00:04:04.801 "unmap": true, 00:04:04.801 "flush": true, 00:04:04.801 "reset": true, 00:04:04.801 "nvme_admin": false, 00:04:04.801 "nvme_io": false, 00:04:04.801 "nvme_io_md": false, 00:04:04.801 "write_zeroes": true, 00:04:04.801 "zcopy": true, 00:04:04.801 "get_zone_info": false, 00:04:04.801 "zone_management": false, 00:04:04.801 
"zone_append": false, 00:04:04.801 "compare": false, 00:04:04.801 "compare_and_write": false, 00:04:04.801 "abort": true, 00:04:04.801 "seek_hole": false, 00:04:04.801 "seek_data": false, 00:04:04.801 "copy": true, 00:04:04.801 "nvme_iov_md": false 00:04:04.801 }, 00:04:04.801 "memory_domains": [ 00:04:04.801 { 00:04:04.801 "dma_device_id": "system", 00:04:04.801 "dma_device_type": 1 00:04:04.801 }, 00:04:04.801 { 00:04:04.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.801 "dma_device_type": 2 00:04:04.801 } 00:04:04.801 ], 00:04:04.801 "driver_specific": {} 00:04:04.801 } 00:04:04.801 ]' 00:04:04.801 17:14:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:04.801 17:14:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:04.801 17:14:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.801 17:14:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.801 17:14:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:04.801 17:14:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:04.801 17:14:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:04.801 00:04:04.801 real 0m0.144s 00:04:04.801 user 0m0.087s 00:04:04.801 sys 0m0.017s 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.801 17:14:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.801 ************************************ 
00:04:04.801 END TEST rpc_plugins 00:04:04.801 ************************************ 00:04:05.059 17:14:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:05.059 17:14:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.059 17:14:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.059 17:14:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.059 ************************************ 00:04:05.059 START TEST rpc_trace_cmd_test 00:04:05.059 ************************************ 00:04:05.059 17:14:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:05.059 17:14:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:05.059 17:14:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:05.059 17:14:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.059 17:14:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:05.059 17:14:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.059 17:14:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:05.059 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1706679", 00:04:05.059 "tpoint_group_mask": "0x8", 00:04:05.059 "iscsi_conn": { 00:04:05.059 "mask": "0x2", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "scsi": { 00:04:05.059 "mask": "0x4", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "bdev": { 00:04:05.059 "mask": "0x8", 00:04:05.059 "tpoint_mask": "0xffffffffffffffff" 00:04:05.059 }, 00:04:05.059 "nvmf_rdma": { 00:04:05.059 "mask": "0x10", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "nvmf_tcp": { 00:04:05.059 "mask": "0x20", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "ftl": { 00:04:05.059 "mask": "0x40", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "blobfs": { 00:04:05.059 "mask": "0x80", 00:04:05.059 
"tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "dsa": { 00:04:05.059 "mask": "0x200", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "thread": { 00:04:05.059 "mask": "0x400", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "nvme_pcie": { 00:04:05.059 "mask": "0x800", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "iaa": { 00:04:05.059 "mask": "0x1000", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "nvme_tcp": { 00:04:05.059 "mask": "0x2000", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "bdev_nvme": { 00:04:05.059 "mask": "0x4000", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "sock": { 00:04:05.059 "mask": "0x8000", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "blob": { 00:04:05.059 "mask": "0x10000", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "bdev_raid": { 00:04:05.059 "mask": "0x20000", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 }, 00:04:05.059 "scheduler": { 00:04:05.059 "mask": "0x40000", 00:04:05.059 "tpoint_mask": "0x0" 00:04:05.059 } 00:04:05.059 }' 00:04:05.059 17:14:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:05.059 17:14:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:05.060 17:14:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:05.060 17:14:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:05.060 17:14:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:05.060 17:14:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:05.060 17:14:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:05.060 17:14:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:05.060 17:14:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:05.060 17:14:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:05.060 00:04:05.060 real 0m0.208s 00:04:05.060 user 0m0.171s 00:04:05.060 sys 0m0.028s 00:04:05.060 17:14:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.060 17:14:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:05.060 ************************************ 00:04:05.060 END TEST rpc_trace_cmd_test 00:04:05.060 ************************************ 00:04:05.318 17:14:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:05.318 17:14:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:05.318 17:14:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:05.318 17:14:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.318 17:14:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.318 17:14:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.319 ************************************ 00:04:05.319 START TEST rpc_daemon_integrity 00:04:05.319 ************************************ 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:05.319 { 00:04:05.319 "name": "Malloc2", 00:04:05.319 "aliases": [ 00:04:05.319 "19094b8b-7086-4602-ae1c-820949a9ed4d" 00:04:05.319 ], 00:04:05.319 "product_name": "Malloc disk", 00:04:05.319 "block_size": 512, 00:04:05.319 "num_blocks": 16384, 00:04:05.319 "uuid": "19094b8b-7086-4602-ae1c-820949a9ed4d", 00:04:05.319 "assigned_rate_limits": { 00:04:05.319 "rw_ios_per_sec": 0, 00:04:05.319 "rw_mbytes_per_sec": 0, 00:04:05.319 "r_mbytes_per_sec": 0, 00:04:05.319 "w_mbytes_per_sec": 0 00:04:05.319 }, 00:04:05.319 "claimed": false, 00:04:05.319 "zoned": false, 00:04:05.319 "supported_io_types": { 00:04:05.319 "read": true, 00:04:05.319 "write": true, 00:04:05.319 "unmap": true, 00:04:05.319 "flush": true, 00:04:05.319 "reset": true, 00:04:05.319 "nvme_admin": false, 00:04:05.319 "nvme_io": false, 00:04:05.319 "nvme_io_md": false, 00:04:05.319 "write_zeroes": true, 00:04:05.319 "zcopy": true, 00:04:05.319 "get_zone_info": false, 00:04:05.319 "zone_management": false, 00:04:05.319 "zone_append": false, 00:04:05.319 "compare": false, 00:04:05.319 "compare_and_write": false, 00:04:05.319 "abort": true, 00:04:05.319 "seek_hole": false, 00:04:05.319 "seek_data": false, 00:04:05.319 "copy": true, 00:04:05.319 "nvme_iov_md": false 00:04:05.319 }, 00:04:05.319 "memory_domains": [ 00:04:05.319 { 
00:04:05.319 "dma_device_id": "system", 00:04:05.319 "dma_device_type": 1 00:04:05.319 }, 00:04:05.319 { 00:04:05.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.319 "dma_device_type": 2 00:04:05.319 } 00:04:05.319 ], 00:04:05.319 "driver_specific": {} 00:04:05.319 } 00:04:05.319 ]' 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.319 [2024-12-09 17:14:31.786868] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:05.319 [2024-12-09 17:14:31.786897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:05.319 [2024-12-09 17:14:31.786911] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2435950 00:04:05.319 [2024-12-09 17:14:31.786917] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:05.319 [2024-12-09 17:14:31.787875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:05.319 [2024-12-09 17:14:31.787897] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:05.319 Passthru0 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:05.319 { 00:04:05.319 "name": "Malloc2", 00:04:05.319 "aliases": [ 00:04:05.319 "19094b8b-7086-4602-ae1c-820949a9ed4d" 00:04:05.319 ], 00:04:05.319 "product_name": "Malloc disk", 00:04:05.319 "block_size": 512, 00:04:05.319 "num_blocks": 16384, 00:04:05.319 "uuid": "19094b8b-7086-4602-ae1c-820949a9ed4d", 00:04:05.319 "assigned_rate_limits": { 00:04:05.319 "rw_ios_per_sec": 0, 00:04:05.319 "rw_mbytes_per_sec": 0, 00:04:05.319 "r_mbytes_per_sec": 0, 00:04:05.319 "w_mbytes_per_sec": 0 00:04:05.319 }, 00:04:05.319 "claimed": true, 00:04:05.319 "claim_type": "exclusive_write", 00:04:05.319 "zoned": false, 00:04:05.319 "supported_io_types": { 00:04:05.319 "read": true, 00:04:05.319 "write": true, 00:04:05.319 "unmap": true, 00:04:05.319 "flush": true, 00:04:05.319 "reset": true, 00:04:05.319 "nvme_admin": false, 00:04:05.319 "nvme_io": false, 00:04:05.319 "nvme_io_md": false, 00:04:05.319 "write_zeroes": true, 00:04:05.319 "zcopy": true, 00:04:05.319 "get_zone_info": false, 00:04:05.319 "zone_management": false, 00:04:05.319 "zone_append": false, 00:04:05.319 "compare": false, 00:04:05.319 "compare_and_write": false, 00:04:05.319 "abort": true, 00:04:05.319 "seek_hole": false, 00:04:05.319 "seek_data": false, 00:04:05.319 "copy": true, 00:04:05.319 "nvme_iov_md": false 00:04:05.319 }, 00:04:05.319 "memory_domains": [ 00:04:05.319 { 00:04:05.319 "dma_device_id": "system", 00:04:05.319 "dma_device_type": 1 00:04:05.319 }, 00:04:05.319 { 00:04:05.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.319 "dma_device_type": 2 00:04:05.319 } 00:04:05.319 ], 00:04:05.319 "driver_specific": {} 00:04:05.319 }, 00:04:05.319 { 00:04:05.319 "name": "Passthru0", 00:04:05.319 "aliases": [ 00:04:05.319 "0f5b3d86-a3ea-5883-92cb-5ebf6915c675" 00:04:05.319 ], 00:04:05.319 "product_name": "passthru", 00:04:05.319 "block_size": 512, 00:04:05.319 "num_blocks": 16384, 00:04:05.319 "uuid": 
"0f5b3d86-a3ea-5883-92cb-5ebf6915c675", 00:04:05.319 "assigned_rate_limits": { 00:04:05.319 "rw_ios_per_sec": 0, 00:04:05.319 "rw_mbytes_per_sec": 0, 00:04:05.319 "r_mbytes_per_sec": 0, 00:04:05.319 "w_mbytes_per_sec": 0 00:04:05.319 }, 00:04:05.319 "claimed": false, 00:04:05.319 "zoned": false, 00:04:05.319 "supported_io_types": { 00:04:05.319 "read": true, 00:04:05.319 "write": true, 00:04:05.319 "unmap": true, 00:04:05.319 "flush": true, 00:04:05.319 "reset": true, 00:04:05.319 "nvme_admin": false, 00:04:05.319 "nvme_io": false, 00:04:05.319 "nvme_io_md": false, 00:04:05.319 "write_zeroes": true, 00:04:05.319 "zcopy": true, 00:04:05.319 "get_zone_info": false, 00:04:05.319 "zone_management": false, 00:04:05.319 "zone_append": false, 00:04:05.319 "compare": false, 00:04:05.319 "compare_and_write": false, 00:04:05.319 "abort": true, 00:04:05.319 "seek_hole": false, 00:04:05.319 "seek_data": false, 00:04:05.319 "copy": true, 00:04:05.319 "nvme_iov_md": false 00:04:05.319 }, 00:04:05.319 "memory_domains": [ 00:04:05.319 { 00:04:05.319 "dma_device_id": "system", 00:04:05.319 "dma_device_type": 1 00:04:05.319 }, 00:04:05.319 { 00:04:05.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.319 "dma_device_type": 2 00:04:05.319 } 00:04:05.319 ], 00:04:05.319 "driver_specific": { 00:04:05.319 "passthru": { 00:04:05.319 "name": "Passthru0", 00:04:05.319 "base_bdev_name": "Malloc2" 00:04:05.319 } 00:04:05.319 } 00:04:05.319 } 00:04:05.319 ]' 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:05.319 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:05.578 00:04:05.578 real 0m0.277s 00:04:05.578 user 0m0.176s 00:04:05.578 sys 0m0.037s 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.578 17:14:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.578 ************************************ 00:04:05.578 END TEST rpc_daemon_integrity 00:04:05.578 ************************************ 00:04:05.578 17:14:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:05.578 17:14:31 rpc -- rpc/rpc.sh@84 -- # killprocess 1706679 00:04:05.578 17:14:31 rpc -- common/autotest_common.sh@954 -- # '[' -z 1706679 ']' 00:04:05.578 17:14:31 rpc -- common/autotest_common.sh@958 -- # kill -0 1706679 00:04:05.578 17:14:31 rpc -- common/autotest_common.sh@959 -- # uname 00:04:05.578 17:14:31 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:05.578 17:14:31 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1706679 00:04:05.578 17:14:32 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:05.578 17:14:32 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:05.578 17:14:32 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706679' 00:04:05.578 killing process with pid 1706679 00:04:05.578 17:14:32 rpc -- common/autotest_common.sh@973 -- # kill 1706679 00:04:05.578 17:14:32 rpc -- common/autotest_common.sh@978 -- # wait 1706679 00:04:05.837 00:04:05.837 real 0m2.556s 00:04:05.837 user 0m3.254s 00:04:05.837 sys 0m0.707s 00:04:05.837 17:14:32 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.837 17:14:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.837 ************************************ 00:04:05.837 END TEST rpc 00:04:05.837 ************************************ 00:04:05.837 17:14:32 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:05.837 17:14:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.837 17:14:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.837 17:14:32 -- common/autotest_common.sh@10 -- # set +x 00:04:05.837 ************************************ 00:04:05.838 START TEST skip_rpc 00:04:05.838 ************************************ 00:04:05.838 17:14:32 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:06.097 * Looking for test storage... 
00:04:06.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.097 17:14:32 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:06.097 17:14:32 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:06.097 17:14:32 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:06.097 17:14:32 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.097 17:14:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:06.097 17:14:32 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.097 17:14:32 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:06.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.097 --rc genhtml_branch_coverage=1 00:04:06.097 --rc genhtml_function_coverage=1 00:04:06.097 --rc genhtml_legend=1 00:04:06.097 --rc geninfo_all_blocks=1 00:04:06.097 --rc geninfo_unexecuted_blocks=1 00:04:06.097 00:04:06.097 ' 00:04:06.097 17:14:32 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:06.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.097 --rc genhtml_branch_coverage=1 00:04:06.097 --rc genhtml_function_coverage=1 00:04:06.097 --rc genhtml_legend=1 00:04:06.097 --rc geninfo_all_blocks=1 00:04:06.097 --rc geninfo_unexecuted_blocks=1 00:04:06.097 00:04:06.097 ' 00:04:06.097 17:14:32 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:06.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.097 --rc genhtml_branch_coverage=1 00:04:06.097 --rc genhtml_function_coverage=1 00:04:06.097 --rc genhtml_legend=1 00:04:06.097 --rc geninfo_all_blocks=1 00:04:06.097 --rc geninfo_unexecuted_blocks=1 00:04:06.097 00:04:06.097 ' 00:04:06.097 17:14:32 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:06.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.097 --rc genhtml_branch_coverage=1 00:04:06.097 --rc genhtml_function_coverage=1 00:04:06.097 --rc genhtml_legend=1 00:04:06.097 --rc geninfo_all_blocks=1 00:04:06.097 --rc geninfo_unexecuted_blocks=1 00:04:06.097 00:04:06.097 ' 00:04:06.097 17:14:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:06.097 17:14:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:06.097 17:14:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:06.097 17:14:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.097 17:14:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.097 17:14:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.097 ************************************ 00:04:06.097 START TEST skip_rpc 00:04:06.097 ************************************ 00:04:06.097 17:14:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:06.097 17:14:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1707307 00:04:06.097 17:14:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.097 17:14:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:06.097 17:14:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:06.097 [2024-12-09 17:14:32.633006] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:04:06.097 [2024-12-09 17:14:32.633046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707307 ] 00:04:06.357 [2024-12-09 17:14:32.706322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.357 [2024-12-09 17:14:32.744210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:11.642 17:14:37 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1707307 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1707307 ']' 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1707307 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.642 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1707307 00:04:11.643 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.643 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.643 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1707307' 00:04:11.643 killing process with pid 1707307 00:04:11.643 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1707307 00:04:11.643 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1707307 00:04:11.643 00:04:11.643 real 0m5.360s 00:04:11.643 user 0m5.112s 00:04:11.643 sys 0m0.280s 00:04:11.643 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.643 17:14:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.643 ************************************ 00:04:11.643 END TEST skip_rpc 00:04:11.643 ************************************ 00:04:11.643 17:14:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:11.643 17:14:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.643 17:14:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.643 17:14:37 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.643 ************************************ 00:04:11.643 START TEST skip_rpc_with_json 00:04:11.643 ************************************ 00:04:11.643 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:11.643 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:11.643 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1708235 00:04:11.643 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.643 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:11.643 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1708235 00:04:11.643 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1708235 ']' 00:04:11.643 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.643 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.643 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.643 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.643 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.643 [2024-12-09 17:14:38.060805] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:04:11.643 [2024-12-09 17:14:38.060847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1708235 ] 00:04:11.643 [2024-12-09 17:14:38.136013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.643 [2024-12-09 17:14:38.174765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.901 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.901 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:11.901 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:11.901 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.901 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.901 [2024-12-09 17:14:38.395498] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:11.901 request: 00:04:11.901 { 00:04:11.901 "trtype": "tcp", 00:04:11.901 "method": "nvmf_get_transports", 00:04:11.901 "req_id": 1 00:04:11.901 } 00:04:11.901 Got JSON-RPC error response 00:04:11.901 response: 00:04:11.901 { 00:04:11.901 "code": -19, 00:04:11.901 "message": "No such device" 00:04:11.901 } 00:04:11.902 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:11.902 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:11.902 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.902 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.902 [2024-12-09 17:14:38.407604] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:11.902 17:14:38 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.902 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:11.902 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.902 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.161 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.161 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:12.161 { 00:04:12.161 "subsystems": [ 00:04:12.161 { 00:04:12.161 "subsystem": "fsdev", 00:04:12.161 "config": [ 00:04:12.161 { 00:04:12.161 "method": "fsdev_set_opts", 00:04:12.161 "params": { 00:04:12.161 "fsdev_io_pool_size": 65535, 00:04:12.161 "fsdev_io_cache_size": 256 00:04:12.161 } 00:04:12.161 } 00:04:12.161 ] 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "vfio_user_target", 00:04:12.161 "config": null 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "keyring", 00:04:12.161 "config": [] 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "iobuf", 00:04:12.161 "config": [ 00:04:12.161 { 00:04:12.161 "method": "iobuf_set_options", 00:04:12.161 "params": { 00:04:12.161 "small_pool_count": 8192, 00:04:12.161 "large_pool_count": 1024, 00:04:12.161 "small_bufsize": 8192, 00:04:12.161 "large_bufsize": 135168, 00:04:12.161 "enable_numa": false 00:04:12.161 } 00:04:12.161 } 00:04:12.161 ] 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "sock", 00:04:12.161 "config": [ 00:04:12.161 { 00:04:12.161 "method": "sock_set_default_impl", 00:04:12.161 "params": { 00:04:12.161 "impl_name": "posix" 00:04:12.161 } 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "method": "sock_impl_set_options", 00:04:12.161 "params": { 00:04:12.161 "impl_name": "ssl", 00:04:12.161 "recv_buf_size": 4096, 00:04:12.161 "send_buf_size": 4096, 
00:04:12.161 "enable_recv_pipe": true, 00:04:12.161 "enable_quickack": false, 00:04:12.161 "enable_placement_id": 0, 00:04:12.161 "enable_zerocopy_send_server": true, 00:04:12.161 "enable_zerocopy_send_client": false, 00:04:12.161 "zerocopy_threshold": 0, 00:04:12.161 "tls_version": 0, 00:04:12.161 "enable_ktls": false 00:04:12.161 } 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "method": "sock_impl_set_options", 00:04:12.161 "params": { 00:04:12.161 "impl_name": "posix", 00:04:12.161 "recv_buf_size": 2097152, 00:04:12.161 "send_buf_size": 2097152, 00:04:12.161 "enable_recv_pipe": true, 00:04:12.161 "enable_quickack": false, 00:04:12.161 "enable_placement_id": 0, 00:04:12.161 "enable_zerocopy_send_server": true, 00:04:12.161 "enable_zerocopy_send_client": false, 00:04:12.161 "zerocopy_threshold": 0, 00:04:12.161 "tls_version": 0, 00:04:12.161 "enable_ktls": false 00:04:12.161 } 00:04:12.161 } 00:04:12.161 ] 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "vmd", 00:04:12.161 "config": [] 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "accel", 00:04:12.161 "config": [ 00:04:12.161 { 00:04:12.161 "method": "accel_set_options", 00:04:12.161 "params": { 00:04:12.161 "small_cache_size": 128, 00:04:12.161 "large_cache_size": 16, 00:04:12.161 "task_count": 2048, 00:04:12.161 "sequence_count": 2048, 00:04:12.161 "buf_count": 2048 00:04:12.161 } 00:04:12.161 } 00:04:12.161 ] 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "bdev", 00:04:12.161 "config": [ 00:04:12.161 { 00:04:12.161 "method": "bdev_set_options", 00:04:12.161 "params": { 00:04:12.161 "bdev_io_pool_size": 65535, 00:04:12.161 "bdev_io_cache_size": 256, 00:04:12.161 "bdev_auto_examine": true, 00:04:12.161 "iobuf_small_cache_size": 128, 00:04:12.161 "iobuf_large_cache_size": 16 00:04:12.161 } 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "method": "bdev_raid_set_options", 00:04:12.161 "params": { 00:04:12.161 "process_window_size_kb": 1024, 00:04:12.161 "process_max_bandwidth_mb_sec": 0 
00:04:12.161 } 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "method": "bdev_iscsi_set_options", 00:04:12.161 "params": { 00:04:12.161 "timeout_sec": 30 00:04:12.161 } 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "method": "bdev_nvme_set_options", 00:04:12.161 "params": { 00:04:12.161 "action_on_timeout": "none", 00:04:12.161 "timeout_us": 0, 00:04:12.161 "timeout_admin_us": 0, 00:04:12.161 "keep_alive_timeout_ms": 10000, 00:04:12.161 "arbitration_burst": 0, 00:04:12.161 "low_priority_weight": 0, 00:04:12.161 "medium_priority_weight": 0, 00:04:12.161 "high_priority_weight": 0, 00:04:12.161 "nvme_adminq_poll_period_us": 10000, 00:04:12.161 "nvme_ioq_poll_period_us": 0, 00:04:12.161 "io_queue_requests": 0, 00:04:12.161 "delay_cmd_submit": true, 00:04:12.161 "transport_retry_count": 4, 00:04:12.161 "bdev_retry_count": 3, 00:04:12.161 "transport_ack_timeout": 0, 00:04:12.161 "ctrlr_loss_timeout_sec": 0, 00:04:12.161 "reconnect_delay_sec": 0, 00:04:12.161 "fast_io_fail_timeout_sec": 0, 00:04:12.161 "disable_auto_failback": false, 00:04:12.161 "generate_uuids": false, 00:04:12.161 "transport_tos": 0, 00:04:12.161 "nvme_error_stat": false, 00:04:12.161 "rdma_srq_size": 0, 00:04:12.161 "io_path_stat": false, 00:04:12.161 "allow_accel_sequence": false, 00:04:12.161 "rdma_max_cq_size": 0, 00:04:12.161 "rdma_cm_event_timeout_ms": 0, 00:04:12.161 "dhchap_digests": [ 00:04:12.161 "sha256", 00:04:12.161 "sha384", 00:04:12.161 "sha512" 00:04:12.161 ], 00:04:12.161 "dhchap_dhgroups": [ 00:04:12.161 "null", 00:04:12.161 "ffdhe2048", 00:04:12.161 "ffdhe3072", 00:04:12.161 "ffdhe4096", 00:04:12.161 "ffdhe6144", 00:04:12.161 "ffdhe8192" 00:04:12.161 ] 00:04:12.161 } 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "method": "bdev_nvme_set_hotplug", 00:04:12.161 "params": { 00:04:12.161 "period_us": 100000, 00:04:12.161 "enable": false 00:04:12.161 } 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "method": "bdev_wait_for_examine" 00:04:12.161 } 00:04:12.161 ] 00:04:12.161 }, 00:04:12.161 { 
00:04:12.161 "subsystem": "scsi", 00:04:12.161 "config": null 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "scheduler", 00:04:12.161 "config": [ 00:04:12.161 { 00:04:12.161 "method": "framework_set_scheduler", 00:04:12.161 "params": { 00:04:12.161 "name": "static" 00:04:12.161 } 00:04:12.161 } 00:04:12.161 ] 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "vhost_scsi", 00:04:12.161 "config": [] 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "vhost_blk", 00:04:12.161 "config": [] 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "ublk", 00:04:12.161 "config": [] 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "nbd", 00:04:12.161 "config": [] 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "nvmf", 00:04:12.161 "config": [ 00:04:12.161 { 00:04:12.161 "method": "nvmf_set_config", 00:04:12.161 "params": { 00:04:12.161 "discovery_filter": "match_any", 00:04:12.161 "admin_cmd_passthru": { 00:04:12.161 "identify_ctrlr": false 00:04:12.161 }, 00:04:12.161 "dhchap_digests": [ 00:04:12.161 "sha256", 00:04:12.161 "sha384", 00:04:12.161 "sha512" 00:04:12.161 ], 00:04:12.161 "dhchap_dhgroups": [ 00:04:12.161 "null", 00:04:12.161 "ffdhe2048", 00:04:12.161 "ffdhe3072", 00:04:12.161 "ffdhe4096", 00:04:12.161 "ffdhe6144", 00:04:12.161 "ffdhe8192" 00:04:12.161 ] 00:04:12.161 } 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "method": "nvmf_set_max_subsystems", 00:04:12.161 "params": { 00:04:12.161 "max_subsystems": 1024 00:04:12.161 } 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "method": "nvmf_set_crdt", 00:04:12.161 "params": { 00:04:12.161 "crdt1": 0, 00:04:12.161 "crdt2": 0, 00:04:12.161 "crdt3": 0 00:04:12.161 } 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "method": "nvmf_create_transport", 00:04:12.161 "params": { 00:04:12.161 "trtype": "TCP", 00:04:12.161 "max_queue_depth": 128, 00:04:12.161 "max_io_qpairs_per_ctrlr": 127, 00:04:12.161 "in_capsule_data_size": 4096, 00:04:12.161 "max_io_size": 131072, 00:04:12.161 
"io_unit_size": 131072, 00:04:12.161 "max_aq_depth": 128, 00:04:12.161 "num_shared_buffers": 511, 00:04:12.161 "buf_cache_size": 4294967295, 00:04:12.161 "dif_insert_or_strip": false, 00:04:12.161 "zcopy": false, 00:04:12.161 "c2h_success": true, 00:04:12.161 "sock_priority": 0, 00:04:12.161 "abort_timeout_sec": 1, 00:04:12.161 "ack_timeout": 0, 00:04:12.161 "data_wr_pool_size": 0 00:04:12.161 } 00:04:12.161 } 00:04:12.161 ] 00:04:12.161 }, 00:04:12.161 { 00:04:12.161 "subsystem": "iscsi", 00:04:12.161 "config": [ 00:04:12.161 { 00:04:12.161 "method": "iscsi_set_options", 00:04:12.161 "params": { 00:04:12.161 "node_base": "iqn.2016-06.io.spdk", 00:04:12.161 "max_sessions": 128, 00:04:12.161 "max_connections_per_session": 2, 00:04:12.161 "max_queue_depth": 64, 00:04:12.161 "default_time2wait": 2, 00:04:12.161 "default_time2retain": 20, 00:04:12.162 "first_burst_length": 8192, 00:04:12.162 "immediate_data": true, 00:04:12.162 "allow_duplicated_isid": false, 00:04:12.162 "error_recovery_level": 0, 00:04:12.162 "nop_timeout": 60, 00:04:12.162 "nop_in_interval": 30, 00:04:12.162 "disable_chap": false, 00:04:12.162 "require_chap": false, 00:04:12.162 "mutual_chap": false, 00:04:12.162 "chap_group": 0, 00:04:12.162 "max_large_datain_per_connection": 64, 00:04:12.162 "max_r2t_per_connection": 4, 00:04:12.162 "pdu_pool_size": 36864, 00:04:12.162 "immediate_data_pool_size": 16384, 00:04:12.162 "data_out_pool_size": 2048 00:04:12.162 } 00:04:12.162 } 00:04:12.162 ] 00:04:12.162 } 00:04:12.162 ] 00:04:12.162 } 00:04:12.162 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:12.162 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1708235 00:04:12.162 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1708235 ']' 00:04:12.162 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1708235 00:04:12.162 17:14:38 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:04:12.162 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.162 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1708235 00:04:12.162 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.162 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.162 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1708235' 00:04:12.162 killing process with pid 1708235 00:04:12.162 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1708235 00:04:12.162 17:14:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1708235 00:04:12.420 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1708463 00:04:12.421 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:12.421 17:14:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:17.687 17:14:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1708463 00:04:17.687 17:14:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1708463 ']' 00:04:17.687 17:14:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1708463 00:04:17.687 17:14:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:17.687 17:14:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.687 17:14:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1708463 00:04:17.687 17:14:43 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.687 17:14:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.687 17:14:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1708463' 00:04:17.687 killing process with pid 1708463 00:04:17.687 17:14:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1708463 00:04:17.687 17:14:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1708463 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:17.946 00:04:17.946 real 0m6.284s 00:04:17.946 user 0m5.969s 00:04:17.946 sys 0m0.616s 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.946 ************************************ 00:04:17.946 END TEST skip_rpc_with_json 00:04:17.946 ************************************ 00:04:17.946 17:14:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:17.946 17:14:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.946 17:14:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.946 17:14:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.946 ************************************ 00:04:17.946 START TEST skip_rpc_with_delay 00:04:17.946 ************************************ 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.946 [2024-12-09 17:14:44.417342] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:17.946 00:04:17.946 real 0m0.070s 00:04:17.946 user 0m0.039s 00:04:17.946 sys 0m0.030s 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.946 17:14:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:17.946 ************************************ 00:04:17.946 END TEST skip_rpc_with_delay 00:04:17.946 ************************************ 00:04:17.946 17:14:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:17.946 17:14:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:17.946 17:14:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:17.946 17:14:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.946 17:14:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.946 17:14:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.205 ************************************ 00:04:18.205 START TEST exit_on_failed_rpc_init 00:04:18.205 ************************************ 00:04:18.205 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:18.205 17:14:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1709410 00:04:18.205 17:14:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1709410 00:04:18.205 17:14:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:18.205 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1709410 ']' 00:04:18.205 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.205 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.205 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.205 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.205 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:18.205 [2024-12-09 17:14:44.562151] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:04:18.205 [2024-12-09 17:14:44.562208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709410 ] 00:04:18.205 [2024-12-09 17:14:44.637191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.205 [2024-12-09 17:14:44.677583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:18.464 
17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:18.464 17:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:18.464 [2024-12-09 17:14:44.945642] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:04:18.464 [2024-12-09 17:14:44.945689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709425 ] 00:04:18.464 [2024-12-09 17:14:45.001410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.724 [2024-12-09 17:14:45.040831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.724 [2024-12-09 17:14:45.040885] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:18.724 [2024-12-09 17:14:45.040894] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:18.724 [2024-12-09 17:14:45.040900] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1709410 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1709410 ']' 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1709410 00:04:18.724 17:14:45 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1709410 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1709410' 00:04:18.724 killing process with pid 1709410 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1709410 00:04:18.724 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1709410 00:04:18.982 00:04:18.982 real 0m0.926s 00:04:18.982 user 0m0.985s 00:04:18.982 sys 0m0.366s 00:04:18.982 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.982 17:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:18.982 ************************************ 00:04:18.982 END TEST exit_on_failed_rpc_init 00:04:18.982 ************************************ 00:04:18.982 17:14:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:18.982 00:04:18.982 real 0m13.092s 00:04:18.982 user 0m12.319s 00:04:18.982 sys 0m1.561s 00:04:18.982 17:14:45 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.982 17:14:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.982 ************************************ 00:04:18.982 END TEST skip_rpc 00:04:18.982 ************************************ 00:04:18.982 17:14:45 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:18.982 17:14:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.982 17:14:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.982 17:14:45 -- common/autotest_common.sh@10 -- # set +x 00:04:19.241 ************************************ 00:04:19.241 START TEST rpc_client 00:04:19.241 ************************************ 00:04:19.241 17:14:45 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:19.241 * Looking for test storage... 00:04:19.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:19.241 17:14:45 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:19.241 17:14:45 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:19.241 17:14:45 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:19.241 17:14:45 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.241 17:14:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:19.241 17:14:45 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.241 17:14:45 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:19.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.241 --rc genhtml_branch_coverage=1 00:04:19.241 --rc genhtml_function_coverage=1 00:04:19.241 --rc genhtml_legend=1 00:04:19.241 --rc geninfo_all_blocks=1 00:04:19.241 --rc geninfo_unexecuted_blocks=1 00:04:19.241 00:04:19.241 ' 00:04:19.241 17:14:45 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:19.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.241 --rc genhtml_branch_coverage=1 
00:04:19.241 --rc genhtml_function_coverage=1 00:04:19.241 --rc genhtml_legend=1 00:04:19.241 --rc geninfo_all_blocks=1 00:04:19.241 --rc geninfo_unexecuted_blocks=1 00:04:19.241 00:04:19.241 ' 00:04:19.241 17:14:45 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:19.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.241 --rc genhtml_branch_coverage=1 00:04:19.241 --rc genhtml_function_coverage=1 00:04:19.241 --rc genhtml_legend=1 00:04:19.241 --rc geninfo_all_blocks=1 00:04:19.241 --rc geninfo_unexecuted_blocks=1 00:04:19.241 00:04:19.241 ' 00:04:19.241 17:14:45 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:19.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.241 --rc genhtml_branch_coverage=1 00:04:19.241 --rc genhtml_function_coverage=1 00:04:19.241 --rc genhtml_legend=1 00:04:19.241 --rc geninfo_all_blocks=1 00:04:19.241 --rc geninfo_unexecuted_blocks=1 00:04:19.241 00:04:19.241 ' 00:04:19.241 17:14:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:19.241 OK 00:04:19.241 17:14:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:19.241 00:04:19.241 real 0m0.198s 00:04:19.241 user 0m0.122s 00:04:19.241 sys 0m0.090s 00:04:19.241 17:14:45 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.241 17:14:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:19.241 ************************************ 00:04:19.241 END TEST rpc_client 00:04:19.241 ************************************ 00:04:19.241 17:14:45 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:19.241 17:14:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.241 17:14:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.241 17:14:45 -- common/autotest_common.sh@10 
-- # set +x 00:04:19.500 ************************************ 00:04:19.500 START TEST json_config 00:04:19.500 ************************************ 00:04:19.500 17:14:45 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:19.500 17:14:45 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:19.500 17:14:45 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:19.500 17:14:45 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:19.500 17:14:45 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:19.500 17:14:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.500 17:14:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.500 17:14:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.500 17:14:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.500 17:14:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.500 17:14:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.500 17:14:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.500 17:14:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.500 17:14:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.500 17:14:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.500 17:14:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.500 17:14:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:19.500 17:14:45 json_config -- scripts/common.sh@345 -- # : 1 00:04:19.500 17:14:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.500 17:14:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.500 17:14:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:19.500 17:14:45 json_config -- scripts/common.sh@353 -- # local d=1 00:04:19.500 17:14:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.501 17:14:45 json_config -- scripts/common.sh@355 -- # echo 1 00:04:19.501 17:14:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.501 17:14:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:19.501 17:14:45 json_config -- scripts/common.sh@353 -- # local d=2 00:04:19.501 17:14:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.501 17:14:45 json_config -- scripts/common.sh@355 -- # echo 2 00:04:19.501 17:14:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.501 17:14:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.501 17:14:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.501 17:14:45 json_config -- scripts/common.sh@368 -- # return 0 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:19.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.501 --rc genhtml_branch_coverage=1 00:04:19.501 --rc genhtml_function_coverage=1 00:04:19.501 --rc genhtml_legend=1 00:04:19.501 --rc geninfo_all_blocks=1 00:04:19.501 --rc geninfo_unexecuted_blocks=1 00:04:19.501 00:04:19.501 ' 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:19.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.501 --rc genhtml_branch_coverage=1 00:04:19.501 --rc genhtml_function_coverage=1 00:04:19.501 --rc genhtml_legend=1 00:04:19.501 --rc geninfo_all_blocks=1 00:04:19.501 --rc geninfo_unexecuted_blocks=1 00:04:19.501 00:04:19.501 ' 00:04:19.501 17:14:45 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:19.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.501 --rc genhtml_branch_coverage=1 00:04:19.501 --rc genhtml_function_coverage=1 00:04:19.501 --rc genhtml_legend=1 00:04:19.501 --rc geninfo_all_blocks=1 00:04:19.501 --rc geninfo_unexecuted_blocks=1 00:04:19.501 00:04:19.501 ' 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:19.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.501 --rc genhtml_branch_coverage=1 00:04:19.501 --rc genhtml_function_coverage=1 00:04:19.501 --rc genhtml_legend=1 00:04:19.501 --rc geninfo_all_blocks=1 00:04:19.501 --rc geninfo_unexecuted_blocks=1 00:04:19.501 00:04:19.501 ' 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:19.501 17:14:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:19.501 17:14:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.501 17:14:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.501 17:14:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.501 17:14:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.501 17:14:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.501 17:14:45 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.501 17:14:45 json_config -- paths/export.sh@5 -- # export PATH 00:04:19.501 17:14:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@51 -- # : 0 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:19.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:19.501 17:14:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:19.501 INFO: JSON configuration test init 00:04:19.501 17:14:45 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.501 17:14:45 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:19.501 17:14:45 json_config -- json_config/common.sh@9 -- # local app=target 00:04:19.501 17:14:45 json_config -- json_config/common.sh@10 -- # shift 00:04:19.501 17:14:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:19.501 17:14:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:19.501 17:14:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:19.501 17:14:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.501 17:14:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.501 17:14:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1709768 00:04:19.501 17:14:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:19.501 Waiting for target to run... 
00:04:19.501 17:14:45 json_config -- json_config/common.sh@25 -- # waitforlisten 1709768 /var/tmp/spdk_tgt.sock 00:04:19.501 17:14:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 1709768 ']' 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.501 17:14:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.760 [2024-12-09 17:14:46.046060] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:04:19.760 [2024-12-09 17:14:46.046107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709768 ] 00:04:20.019 [2024-12-09 17:14:46.331067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.019 [2024-12-09 17:14:46.361557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.585 17:14:46 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.585 17:14:46 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:20.585 17:14:46 json_config -- json_config/common.sh@26 -- # echo '' 00:04:20.585 00:04:20.585 17:14:46 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:20.585 17:14:46 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:20.585 17:14:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.585 17:14:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.585 17:14:46 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:20.585 17:14:46 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:20.585 17:14:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.585 17:14:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.585 17:14:46 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:20.585 17:14:46 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:20.585 17:14:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:23.870 17:14:49 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:23.870 17:14:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:23.870 17:14:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.870 17:14:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.870 17:14:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:23.870 17:14:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:23.870 17:14:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:23.870 17:14:49 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:23.870 17:14:49 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:23.870 17:14:49 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:23.870 17:14:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@54 -- # sort 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:23.870 17:14:50 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:23.870 17:14:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.870 17:14:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:23.870 17:14:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.870 17:14:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:23.870 17:14:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:23.870 17:14:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:24.129 MallocForNvmf0 00:04:24.129 17:14:50 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:24.129 17:14:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:24.129 MallocForNvmf1 00:04:24.387 17:14:50 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:24.387 17:14:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:24.387 [2024-12-09 17:14:50.835164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:24.387 17:14:50 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:24.387 17:14:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:24.645 17:14:51 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:24.645 17:14:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:24.903 17:14:51 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:24.903 17:14:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:24.903 17:14:51 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:24.904 17:14:51 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:25.162 [2024-12-09 17:14:51.609489] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:25.162 17:14:51 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:25.162 17:14:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.162 17:14:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.162 17:14:51 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:25.162 17:14:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.162 17:14:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.420 17:14:51 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:25.420 17:14:51 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:25.420 17:14:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:25.420 MallocBdevForConfigChangeCheck 00:04:25.420 17:14:51 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:25.420 17:14:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.420 17:14:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.420 17:14:51 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:25.420 17:14:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.997 17:14:52 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:25.997 INFO: shutting down applications... 00:04:25.997 17:14:52 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:25.997 17:14:52 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:25.997 17:14:52 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:25.997 17:14:52 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:27.373 Calling clear_iscsi_subsystem 00:04:27.373 Calling clear_nvmf_subsystem 00:04:27.373 Calling clear_nbd_subsystem 00:04:27.373 Calling clear_ublk_subsystem 00:04:27.373 Calling clear_vhost_blk_subsystem 00:04:27.373 Calling clear_vhost_scsi_subsystem 00:04:27.373 Calling clear_bdev_subsystem 00:04:27.373 17:14:53 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:27.373 17:14:53 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:27.373 17:14:53 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:27.373 17:14:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:27.373 17:14:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:27.373 17:14:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:27.941 17:14:54 json_config -- json_config/json_config.sh@352 -- # break 00:04:27.941 17:14:54 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:27.941 17:14:54 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:27.941 17:14:54 json_config -- json_config/common.sh@31 -- # local app=target 00:04:27.941 17:14:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:27.941 17:14:54 json_config -- json_config/common.sh@35 -- # [[ -n 1709768 ]] 00:04:27.941 17:14:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1709768 00:04:27.941 17:14:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:27.941 17:14:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.941 17:14:54 json_config -- json_config/common.sh@41 -- # kill -0 1709768 00:04:27.941 17:14:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:28.510 17:14:54 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:28.510 17:14:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.510 17:14:54 json_config -- json_config/common.sh@41 -- # kill -0 1709768 00:04:28.510 17:14:54 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:28.510 17:14:54 json_config -- json_config/common.sh@43 -- # break 00:04:28.510 17:14:54 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:28.510 17:14:54 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:28.510 SPDK target shutdown done 00:04:28.510 17:14:54 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:28.510 INFO: relaunching applications... 
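The shutdown sequence traced above (`kill -SIGINT`, then up to 30 `kill -0` probes with a 0.5 s sleep between them, from `json_config/common.sh`) can be sketched as a standalone helper. The function name `shutdown_app` and the 30 × 0.5 s budget are illustrative approximations of the trace, not the exact SPDK source:

```shell
# Sketch of the shutdown pattern seen in the trace: send SIGINT, then
# poll with `kill -0` (which delivers no signal, only checks liveness)
# until the process exits or the ~15 s budget runs out.
shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
    local i
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0    # exited: success
        sleep 0.5
    done
    return 1   # still running after all retries
}
```

Returning 0 as soon as `kill -0` fails is what lets the trace print "SPDK target shutdown done" after a single 0.5 s sleep.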
00:04:28.510 17:14:54 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.510 17:14:54 json_config -- json_config/common.sh@9 -- # local app=target 00:04:28.510 17:14:54 json_config -- json_config/common.sh@10 -- # shift 00:04:28.510 17:14:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.510 17:14:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.510 17:14:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.510 17:14:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.510 17:14:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.510 17:14:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1711453 00:04:28.510 17:14:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.510 Waiting for target to run... 00:04:28.510 17:14:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.510 17:14:54 json_config -- json_config/common.sh@25 -- # waitforlisten 1711453 /var/tmp/spdk_tgt.sock 00:04:28.510 17:14:54 json_config -- common/autotest_common.sh@835 -- # '[' -z 1711453 ']' 00:04:28.510 17:14:54 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.510 17:14:54 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.510 17:14:54 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:28.510 17:14:54 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.510 17:14:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.510 [2024-12-09 17:14:54.802790] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:04:28.510 [2024-12-09 17:14:54.802847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1711453 ] 00:04:28.768 [2024-12-09 17:14:55.261354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.027 [2024-12-09 17:14:55.314370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.313 [2024-12-09 17:14:58.340979] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:32.313 [2024-12-09 17:14:58.373265] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:32.571 17:14:59 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.571 17:14:59 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:32.571 17:14:59 json_config -- json_config/common.sh@26 -- # echo '' 00:04:32.571 00:04:32.571 17:14:59 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:32.571 17:14:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:32.571 INFO: Checking if target configuration is the same... 
00:04:32.571 17:14:59 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.571 17:14:59 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:32.571 17:14:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.571 + '[' 2 -ne 2 ']' 00:04:32.571 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:32.571 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:32.571 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:32.571 +++ basename /dev/fd/62 00:04:32.571 ++ mktemp /tmp/62.XXX 00:04:32.571 + tmp_file_1=/tmp/62.tw8 00:04:32.571 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.571 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:32.571 + tmp_file_2=/tmp/spdk_tgt_config.json.b0w 00:04:32.571 + ret=0 00:04:32.571 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:32.829 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.086 + diff -u /tmp/62.tw8 /tmp/spdk_tgt_config.json.b0w 00:04:33.086 + echo 'INFO: JSON config files are the same' 00:04:33.086 INFO: JSON config files are the same 00:04:33.087 + rm /tmp/62.tw8 /tmp/spdk_tgt_config.json.b0w 00:04:33.087 + exit 0 00:04:33.087 17:14:59 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:33.087 17:14:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:33.087 INFO: changing configuration and checking if this can be detected... 
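The comparison above dumps the live configuration over RPC (`save_config`), pushes both JSON documents through `config_filter.py -method sort`, and `diff`s the normalized temp files. The same normalize-then-diff idea can be sketched with `python3 -m json` key sorting standing in for the SPDK filter script; `json_same` is an illustrative name, not an SPDK helper:

```shell
# Sketch: report whether two JSON config files are semantically equal,
# ignoring key order, by normalizing each with sort_keys before comparing.
# Assumes python3 is available; mirrors the config_filter.py sort approach.
json_same() {
    local a b
    a=$(python3 -c 'import json,sys; print(json.dumps(json.load(open(sys.argv[1])), sort_keys=True))' "$1") || return 2
    b=$(python3 -c 'import json,sys; print(json.dumps(json.load(open(sys.argv[1])), sort_keys=True))' "$2") || return 2
    [ "$a" = "$b" ]
}
```

A zero exit corresponds to the "INFO: JSON config files are the same" branch; a nonzero exit corresponds to `ret=1` and the "configuration change detected" path that follows.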
00:04:33.087 17:14:59 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:33.087 17:14:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:33.087 17:14:59 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.087 17:14:59 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:33.087 17:14:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.087 + '[' 2 -ne 2 ']' 00:04:33.087 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:33.344 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:33.344 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:33.345 +++ basename /dev/fd/62 00:04:33.345 ++ mktemp /tmp/62.XXX 00:04:33.345 + tmp_file_1=/tmp/62.OT1 00:04:33.345 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.345 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:33.345 + tmp_file_2=/tmp/spdk_tgt_config.json.0PV 00:04:33.345 + ret=0 00:04:33.345 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.604 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.604 + diff -u /tmp/62.OT1 /tmp/spdk_tgt_config.json.0PV 00:04:33.604 + ret=1 00:04:33.604 + echo '=== Start of file: /tmp/62.OT1 ===' 00:04:33.604 + cat /tmp/62.OT1 00:04:33.604 + echo '=== End of file: /tmp/62.OT1 ===' 00:04:33.604 + echo '' 00:04:33.604 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0PV ===' 00:04:33.604 + cat /tmp/spdk_tgt_config.json.0PV 00:04:33.604 + echo '=== End of file: /tmp/spdk_tgt_config.json.0PV ===' 00:04:33.604 + echo '' 00:04:33.604 + rm /tmp/62.OT1 /tmp/spdk_tgt_config.json.0PV 00:04:33.604 + exit 1 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:33.604 INFO: configuration change detected. 
00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:33.604 17:15:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.604 17:15:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@324 -- # [[ -n 1711453 ]] 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:33.604 17:15:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.604 17:15:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:33.604 17:15:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.604 17:15:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.604 17:15:00 json_config -- json_config/json_config.sh@330 -- # killprocess 1711453 00:04:33.604 17:15:00 json_config -- common/autotest_common.sh@954 -- # '[' -z 1711453 ']' 00:04:33.604 17:15:00 json_config -- common/autotest_common.sh@958 -- # kill -0 
1711453 00:04:33.604 17:15:00 json_config -- common/autotest_common.sh@959 -- # uname 00:04:33.604 17:15:00 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.604 17:15:00 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1711453 00:04:33.604 17:15:00 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.862 17:15:00 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.862 17:15:00 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1711453' 00:04:33.862 killing process with pid 1711453 00:04:33.862 17:15:00 json_config -- common/autotest_common.sh@973 -- # kill 1711453 00:04:33.862 17:15:00 json_config -- common/autotest_common.sh@978 -- # wait 1711453 00:04:35.237 17:15:01 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.237 17:15:01 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:35.237 17:15:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:35.237 17:15:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.237 17:15:01 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:35.237 17:15:01 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:35.237 INFO: Success 00:04:35.237 00:04:35.237 real 0m15.842s 00:04:35.237 user 0m16.472s 00:04:35.237 sys 0m2.600s 00:04:35.237 17:15:01 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.237 17:15:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.237 ************************************ 00:04:35.237 END TEST json_config 00:04:35.237 ************************************ 00:04:35.237 17:15:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:35.237 17:15:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.237 17:15:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.237 17:15:01 -- common/autotest_common.sh@10 -- # set +x 00:04:35.237 ************************************ 00:04:35.237 START TEST json_config_extra_key 00:04:35.237 ************************************ 00:04:35.237 17:15:01 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:35.497 17:15:01 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.497 17:15:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.497 17:15:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.497 17:15:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:35.497 17:15:01 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.497 17:15:01 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.497 --rc genhtml_branch_coverage=1 00:04:35.497 --rc genhtml_function_coverage=1 00:04:35.497 --rc genhtml_legend=1 00:04:35.497 --rc geninfo_all_blocks=1 
00:04:35.497 --rc geninfo_unexecuted_blocks=1 00:04:35.497 00:04:35.497 ' 00:04:35.497 17:15:01 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.497 --rc genhtml_branch_coverage=1 00:04:35.497 --rc genhtml_function_coverage=1 00:04:35.497 --rc genhtml_legend=1 00:04:35.497 --rc geninfo_all_blocks=1 00:04:35.497 --rc geninfo_unexecuted_blocks=1 00:04:35.497 00:04:35.497 ' 00:04:35.497 17:15:01 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.497 --rc genhtml_branch_coverage=1 00:04:35.497 --rc genhtml_function_coverage=1 00:04:35.497 --rc genhtml_legend=1 00:04:35.497 --rc geninfo_all_blocks=1 00:04:35.497 --rc geninfo_unexecuted_blocks=1 00:04:35.497 00:04:35.497 ' 00:04:35.497 17:15:01 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.497 --rc genhtml_branch_coverage=1 00:04:35.497 --rc genhtml_function_coverage=1 00:04:35.497 --rc genhtml_legend=1 00:04:35.497 --rc geninfo_all_blocks=1 00:04:35.497 --rc geninfo_unexecuted_blocks=1 00:04:35.497 00:04:35.497 ' 00:04:35.497 17:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.497 17:15:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.497 17:15:01 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.497 17:15:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.497 17:15:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.497 17:15:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:35.497 17:15:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:35.497 17:15:01 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:35.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:35.497 17:15:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:35.497 17:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:35.497 17:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:35.497 17:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:35.497 17:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:35.497 17:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:35.497 17:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:35.497 17:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:35.497 17:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:35.498 17:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:35.498 17:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:35.498 17:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:35.498 INFO: launching applications... 00:04:35.498 17:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:35.498 17:15:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:35.498 17:15:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:35.498 17:15:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:35.498 17:15:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:35.498 17:15:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:35.498 17:15:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.498 17:15:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.498 17:15:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1712829 00:04:35.498 17:15:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:35.498 Waiting for target to run... 
00:04:35.498 17:15:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1712829 /var/tmp/spdk_tgt.sock 00:04:35.498 17:15:01 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1712829 ']' 00:04:35.498 17:15:01 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:35.498 17:15:01 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:35.498 17:15:01 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.498 17:15:01 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:35.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:35.498 17:15:01 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.498 17:15:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:35.498 [2024-12-09 17:15:01.956619] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:04:35.498 [2024-12-09 17:15:01.956670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712829 ] 00:04:36.065 [2024-12-09 17:15:02.411219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.065 [2024-12-09 17:15:02.463538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.324 17:15:02 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.324 17:15:02 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:36.324 17:15:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:36.324 00:04:36.324 17:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:36.324 INFO: shutting down applications... 00:04:36.324 17:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:36.324 17:15:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:36.324 17:15:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:36.324 17:15:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1712829 ]] 00:04:36.324 17:15:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1712829 00:04:36.324 17:15:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:36.324 17:15:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.324 17:15:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1712829 00:04:36.324 17:15:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:36.891 17:15:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:36.891 17:15:03 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.891 17:15:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1712829 00:04:36.891 17:15:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:36.891 17:15:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:36.891 17:15:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:36.891 17:15:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:36.891 SPDK target shutdown done 00:04:36.891 17:15:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:36.891 Success 00:04:36.892 00:04:36.892 real 0m1.596s 00:04:36.892 user 0m1.232s 00:04:36.892 sys 0m0.558s 00:04:36.892 17:15:03 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.892 17:15:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.892 ************************************ 00:04:36.892 END TEST json_config_extra_key 00:04:36.892 ************************************ 00:04:36.892 17:15:03 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:36.892 17:15:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.892 17:15:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.892 17:15:03 -- common/autotest_common.sh@10 -- # set +x 00:04:36.892 ************************************ 00:04:36.892 START TEST alias_rpc 00:04:36.892 ************************************ 00:04:36.892 17:15:03 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.151 * Looking for test storage... 
00:04:37.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.151 17:15:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.151 --rc genhtml_branch_coverage=1 00:04:37.151 --rc genhtml_function_coverage=1 00:04:37.151 --rc genhtml_legend=1 00:04:37.151 --rc geninfo_all_blocks=1 00:04:37.151 --rc geninfo_unexecuted_blocks=1 00:04:37.151 00:04:37.151 ' 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.151 --rc genhtml_branch_coverage=1 00:04:37.151 --rc genhtml_function_coverage=1 00:04:37.151 --rc genhtml_legend=1 00:04:37.151 --rc geninfo_all_blocks=1 00:04:37.151 --rc geninfo_unexecuted_blocks=1 00:04:37.151 00:04:37.151 ' 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@1725 -- 
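The xtrace above is scripts/common.sh evaluating `lt 1.15 2` to pick lcov coverage flags: both version strings are split into fields, padded to the same length, and compared numerically left to right (so `1.15 < 2` because `1 < 2` in the first field, not because of any string order). A hedged sketch of that idea — the function name is mine, not the script's:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions idea traced above: split each version on dots,
# then compare numeric fields left to right, treating missing fields as 0.
version_lt() {
    local IFS=. i n x y
    local -a a=($1) b=($2)          # unquoted on purpose: IFS does the split
    n=${#a[@]}
    if (( ${#b[@]} > n )); then n=${#b[@]}; fi
    for ((i = 0; i < n; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        if (( 10#$x < 10#$y )); then return 0; fi   # 10# guards leading zeros
        if (( 10#$x > 10#$y )); then return 1; fi
    done
    return 1                        # equal versions are not "less than"
}
```

The same detection block reappears verbatim for every test suite in this log (spdkcli_tcp, dpdk_mem_utility), since each sourced script re-runs the lcov probe.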
# export 'LCOV=lcov 00:04:37.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.151 --rc genhtml_branch_coverage=1 00:04:37.151 --rc genhtml_function_coverage=1 00:04:37.151 --rc genhtml_legend=1 00:04:37.151 --rc geninfo_all_blocks=1 00:04:37.151 --rc geninfo_unexecuted_blocks=1 00:04:37.151 00:04:37.151 ' 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.151 --rc genhtml_branch_coverage=1 00:04:37.151 --rc genhtml_function_coverage=1 00:04:37.151 --rc genhtml_legend=1 00:04:37.151 --rc geninfo_all_blocks=1 00:04:37.151 --rc geninfo_unexecuted_blocks=1 00:04:37.151 00:04:37.151 ' 00:04:37.151 17:15:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:37.151 17:15:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1713119 00:04:37.151 17:15:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1713119 00:04:37.151 17:15:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1713119 ']' 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.151 17:15:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.151 [2024-12-09 17:15:03.608233] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
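The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the waitforlisten helper: spdk_tgt is launched in the background, and the script polls until the RPC UNIX socket shows up, bailing out early if the process dies during startup. A sketch of that idea under assumptions (helper name, 100×0.1s budget are mine):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idea: poll for the RPC UNIX socket created by
# a backgrounded target, failing fast if the target dies before listening.
wait_for_socket() {
    local pid=$1 sock=$2 i
    for ((i = 0; i < 100; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            return 1        # target died before it started listening
        fi
        if [ -S "$sock" ]; then
            return 0        # socket exists: target is ready for RPCs
        fi
        sleep 0.1
    done
    return 1
}
```

Checking the PID on every iteration is what keeps a crashed target from hanging the suite for the full timeout.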
00:04:37.151 [2024-12-09 17:15:03.608278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713119 ] 00:04:37.151 [2024-12-09 17:15:03.682824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.409 [2024-12-09 17:15:03.721985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.409 17:15:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.409 17:15:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:37.409 17:15:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:37.668 17:15:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1713119 00:04:37.668 17:15:04 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1713119 ']' 00:04:37.668 17:15:04 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1713119 00:04:37.668 17:15:04 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:37.668 17:15:04 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.668 17:15:04 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1713119 00:04:37.926 17:15:04 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.926 17:15:04 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.926 17:15:04 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1713119' 00:04:37.926 killing process with pid 1713119 00:04:37.926 17:15:04 alias_rpc -- common/autotest_common.sh@973 -- # kill 1713119 00:04:37.926 17:15:04 alias_rpc -- common/autotest_common.sh@978 -- # wait 1713119 00:04:38.185 00:04:38.185 real 0m1.134s 00:04:38.185 user 0m1.135s 00:04:38.185 sys 0m0.433s 00:04:38.185 17:15:04 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.185 17:15:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.185 ************************************ 00:04:38.185 END TEST alias_rpc 00:04:38.185 ************************************ 00:04:38.185 17:15:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:38.185 17:15:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:38.185 17:15:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.185 17:15:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.185 17:15:04 -- common/autotest_common.sh@10 -- # set +x 00:04:38.185 ************************************ 00:04:38.185 START TEST spdkcli_tcp 00:04:38.185 ************************************ 00:04:38.185 17:15:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:38.185 * Looking for test storage... 
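The killprocess teardown traced in the alias_rpc run above (autotest_common.sh @954–@978) checks the PID is alive, inspects its command name with `ps --no-headers -o comm=`, refuses to kill a bare `sudo` wrapper, then kills and `wait`s to reap it. A minimal sketch under assumptions — the real helper has more branches (non-Linux uname, sudo escalation) than shown here:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern: validate the PID, sanity-check its
# command name, then kill it and reap it with `wait` (works because the
# target was started as a child of this shell).
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1   # not running
    name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" = "sudo" ]; then
        return 1                             # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore the kill exit status
}
```

The `comm=` check is why the log prints `process_name=reactor_0` before killing: SPDK's main thread renames itself, so the helper sees `reactor_0` rather than `spdk_tgt`.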
00:04:38.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:38.185 17:15:04 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.185 17:15:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.185 17:15:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.443 17:15:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.443 17:15:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:38.443 17:15:04 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.443 17:15:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.443 --rc genhtml_branch_coverage=1 00:04:38.443 --rc genhtml_function_coverage=1 00:04:38.443 --rc genhtml_legend=1 00:04:38.443 --rc geninfo_all_blocks=1 00:04:38.443 --rc geninfo_unexecuted_blocks=1 00:04:38.443 00:04:38.443 ' 00:04:38.443 17:15:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.443 --rc genhtml_branch_coverage=1 00:04:38.443 --rc genhtml_function_coverage=1 00:04:38.443 --rc genhtml_legend=1 00:04:38.443 --rc geninfo_all_blocks=1 00:04:38.443 --rc geninfo_unexecuted_blocks=1 00:04:38.443 00:04:38.443 ' 00:04:38.443 17:15:04 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:38.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.443 --rc genhtml_branch_coverage=1 00:04:38.443 --rc genhtml_function_coverage=1 00:04:38.443 --rc genhtml_legend=1 00:04:38.443 --rc geninfo_all_blocks=1 00:04:38.443 --rc geninfo_unexecuted_blocks=1 00:04:38.443 00:04:38.443 ' 00:04:38.443 17:15:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.443 --rc genhtml_branch_coverage=1 00:04:38.443 --rc genhtml_function_coverage=1 00:04:38.443 --rc genhtml_legend=1 00:04:38.443 --rc geninfo_all_blocks=1 00:04:38.443 --rc geninfo_unexecuted_blocks=1 00:04:38.443 00:04:38.443 ' 00:04:38.444 17:15:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:38.444 17:15:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:38.444 17:15:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:38.444 17:15:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:38.444 17:15:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:38.444 17:15:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:38.444 17:15:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:38.444 17:15:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.444 17:15:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.444 17:15:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1713403 00:04:38.444 17:15:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1713403 00:04:38.444 17:15:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:38.444 17:15:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1713403 ']' 00:04:38.444 17:15:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.444 17:15:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.444 17:15:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.444 17:15:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.444 17:15:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.444 [2024-12-09 17:15:04.817600] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:04:38.444 [2024-12-09 17:15:04.817648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713403 ] 00:04:38.444 [2024-12-09 17:15:04.892585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.444 [2024-12-09 17:15:04.934409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.444 [2024-12-09 17:15:04.934412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.702 17:15:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.702 17:15:05 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:38.702 17:15:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1713438 00:04:38.702 17:15:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:38.702 17:15:05 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:38.960 [ 00:04:38.960 "bdev_malloc_delete", 00:04:38.960 "bdev_malloc_create", 00:04:38.960 "bdev_null_resize", 00:04:38.960 "bdev_null_delete", 00:04:38.960 "bdev_null_create", 00:04:38.960 "bdev_nvme_cuse_unregister", 00:04:38.960 "bdev_nvme_cuse_register", 00:04:38.960 "bdev_opal_new_user", 00:04:38.960 "bdev_opal_set_lock_state", 00:04:38.960 "bdev_opal_delete", 00:04:38.960 "bdev_opal_get_info", 00:04:38.960 "bdev_opal_create", 00:04:38.960 "bdev_nvme_opal_revert", 00:04:38.960 "bdev_nvme_opal_init", 00:04:38.960 "bdev_nvme_send_cmd", 00:04:38.960 "bdev_nvme_set_keys", 00:04:38.960 "bdev_nvme_get_path_iostat", 00:04:38.960 "bdev_nvme_get_mdns_discovery_info", 00:04:38.960 "bdev_nvme_stop_mdns_discovery", 00:04:38.960 "bdev_nvme_start_mdns_discovery", 00:04:38.960 "bdev_nvme_set_multipath_policy", 00:04:38.960 "bdev_nvme_set_preferred_path", 00:04:38.960 "bdev_nvme_get_io_paths", 00:04:38.960 "bdev_nvme_remove_error_injection", 00:04:38.960 "bdev_nvme_add_error_injection", 00:04:38.960 "bdev_nvme_get_discovery_info", 00:04:38.960 "bdev_nvme_stop_discovery", 00:04:38.960 "bdev_nvme_start_discovery", 00:04:38.960 "bdev_nvme_get_controller_health_info", 00:04:38.960 "bdev_nvme_disable_controller", 00:04:38.960 "bdev_nvme_enable_controller", 00:04:38.960 "bdev_nvme_reset_controller", 00:04:38.960 "bdev_nvme_get_transport_statistics", 00:04:38.960 "bdev_nvme_apply_firmware", 00:04:38.960 "bdev_nvme_detach_controller", 00:04:38.960 "bdev_nvme_get_controllers", 00:04:38.960 "bdev_nvme_attach_controller", 00:04:38.960 "bdev_nvme_set_hotplug", 00:04:38.960 "bdev_nvme_set_options", 00:04:38.960 "bdev_passthru_delete", 00:04:38.960 "bdev_passthru_create", 00:04:38.960 "bdev_lvol_set_parent_bdev", 00:04:38.960 "bdev_lvol_set_parent", 00:04:38.960 "bdev_lvol_check_shallow_copy", 00:04:38.960 "bdev_lvol_start_shallow_copy", 00:04:38.960 "bdev_lvol_grow_lvstore", 00:04:38.960 
"bdev_lvol_get_lvols", 00:04:38.960 "bdev_lvol_get_lvstores", 00:04:38.960 "bdev_lvol_delete", 00:04:38.960 "bdev_lvol_set_read_only", 00:04:38.960 "bdev_lvol_resize", 00:04:38.960 "bdev_lvol_decouple_parent", 00:04:38.960 "bdev_lvol_inflate", 00:04:38.960 "bdev_lvol_rename", 00:04:38.960 "bdev_lvol_clone_bdev", 00:04:38.960 "bdev_lvol_clone", 00:04:38.960 "bdev_lvol_snapshot", 00:04:38.960 "bdev_lvol_create", 00:04:38.960 "bdev_lvol_delete_lvstore", 00:04:38.960 "bdev_lvol_rename_lvstore", 00:04:38.960 "bdev_lvol_create_lvstore", 00:04:38.960 "bdev_raid_set_options", 00:04:38.960 "bdev_raid_remove_base_bdev", 00:04:38.960 "bdev_raid_add_base_bdev", 00:04:38.960 "bdev_raid_delete", 00:04:38.960 "bdev_raid_create", 00:04:38.960 "bdev_raid_get_bdevs", 00:04:38.960 "bdev_error_inject_error", 00:04:38.960 "bdev_error_delete", 00:04:38.960 "bdev_error_create", 00:04:38.960 "bdev_split_delete", 00:04:38.960 "bdev_split_create", 00:04:38.960 "bdev_delay_delete", 00:04:38.960 "bdev_delay_create", 00:04:38.960 "bdev_delay_update_latency", 00:04:38.960 "bdev_zone_block_delete", 00:04:38.960 "bdev_zone_block_create", 00:04:38.960 "blobfs_create", 00:04:38.960 "blobfs_detect", 00:04:38.960 "blobfs_set_cache_size", 00:04:38.960 "bdev_aio_delete", 00:04:38.960 "bdev_aio_rescan", 00:04:38.960 "bdev_aio_create", 00:04:38.960 "bdev_ftl_set_property", 00:04:38.960 "bdev_ftl_get_properties", 00:04:38.960 "bdev_ftl_get_stats", 00:04:38.960 "bdev_ftl_unmap", 00:04:38.960 "bdev_ftl_unload", 00:04:38.960 "bdev_ftl_delete", 00:04:38.960 "bdev_ftl_load", 00:04:38.960 "bdev_ftl_create", 00:04:38.960 "bdev_virtio_attach_controller", 00:04:38.960 "bdev_virtio_scsi_get_devices", 00:04:38.960 "bdev_virtio_detach_controller", 00:04:38.960 "bdev_virtio_blk_set_hotplug", 00:04:38.960 "bdev_iscsi_delete", 00:04:38.960 "bdev_iscsi_create", 00:04:38.960 "bdev_iscsi_set_options", 00:04:38.960 "accel_error_inject_error", 00:04:38.960 "ioat_scan_accel_module", 00:04:38.960 "dsa_scan_accel_module", 
00:04:38.960 "iaa_scan_accel_module", 00:04:38.960 "vfu_virtio_create_fs_endpoint", 00:04:38.960 "vfu_virtio_create_scsi_endpoint", 00:04:38.960 "vfu_virtio_scsi_remove_target", 00:04:38.960 "vfu_virtio_scsi_add_target", 00:04:38.960 "vfu_virtio_create_blk_endpoint", 00:04:38.960 "vfu_virtio_delete_endpoint", 00:04:38.960 "keyring_file_remove_key", 00:04:38.960 "keyring_file_add_key", 00:04:38.960 "keyring_linux_set_options", 00:04:38.960 "fsdev_aio_delete", 00:04:38.960 "fsdev_aio_create", 00:04:38.960 "iscsi_get_histogram", 00:04:38.960 "iscsi_enable_histogram", 00:04:38.960 "iscsi_set_options", 00:04:38.960 "iscsi_get_auth_groups", 00:04:38.960 "iscsi_auth_group_remove_secret", 00:04:38.960 "iscsi_auth_group_add_secret", 00:04:38.960 "iscsi_delete_auth_group", 00:04:38.960 "iscsi_create_auth_group", 00:04:38.960 "iscsi_set_discovery_auth", 00:04:38.960 "iscsi_get_options", 00:04:38.960 "iscsi_target_node_request_logout", 00:04:38.960 "iscsi_target_node_set_redirect", 00:04:38.960 "iscsi_target_node_set_auth", 00:04:38.960 "iscsi_target_node_add_lun", 00:04:38.960 "iscsi_get_stats", 00:04:38.960 "iscsi_get_connections", 00:04:38.960 "iscsi_portal_group_set_auth", 00:04:38.960 "iscsi_start_portal_group", 00:04:38.960 "iscsi_delete_portal_group", 00:04:38.960 "iscsi_create_portal_group", 00:04:38.960 "iscsi_get_portal_groups", 00:04:38.960 "iscsi_delete_target_node", 00:04:38.960 "iscsi_target_node_remove_pg_ig_maps", 00:04:38.960 "iscsi_target_node_add_pg_ig_maps", 00:04:38.960 "iscsi_create_target_node", 00:04:38.960 "iscsi_get_target_nodes", 00:04:38.960 "iscsi_delete_initiator_group", 00:04:38.960 "iscsi_initiator_group_remove_initiators", 00:04:38.960 "iscsi_initiator_group_add_initiators", 00:04:38.960 "iscsi_create_initiator_group", 00:04:38.960 "iscsi_get_initiator_groups", 00:04:38.960 "nvmf_set_crdt", 00:04:38.960 "nvmf_set_config", 00:04:38.960 "nvmf_set_max_subsystems", 00:04:38.960 "nvmf_stop_mdns_prr", 00:04:38.960 "nvmf_publish_mdns_prr", 
00:04:38.960 "nvmf_subsystem_get_listeners", 00:04:38.960 "nvmf_subsystem_get_qpairs", 00:04:38.960 "nvmf_subsystem_get_controllers", 00:04:38.960 "nvmf_get_stats", 00:04:38.960 "nvmf_get_transports", 00:04:38.960 "nvmf_create_transport", 00:04:38.960 "nvmf_get_targets", 00:04:38.960 "nvmf_delete_target", 00:04:38.960 "nvmf_create_target", 00:04:38.960 "nvmf_subsystem_allow_any_host", 00:04:38.960 "nvmf_subsystem_set_keys", 00:04:38.960 "nvmf_subsystem_remove_host", 00:04:38.960 "nvmf_subsystem_add_host", 00:04:38.960 "nvmf_ns_remove_host", 00:04:38.960 "nvmf_ns_add_host", 00:04:38.960 "nvmf_subsystem_remove_ns", 00:04:38.960 "nvmf_subsystem_set_ns_ana_group", 00:04:38.960 "nvmf_subsystem_add_ns", 00:04:38.960 "nvmf_subsystem_listener_set_ana_state", 00:04:38.960 "nvmf_discovery_get_referrals", 00:04:38.961 "nvmf_discovery_remove_referral", 00:04:38.961 "nvmf_discovery_add_referral", 00:04:38.961 "nvmf_subsystem_remove_listener", 00:04:38.961 "nvmf_subsystem_add_listener", 00:04:38.961 "nvmf_delete_subsystem", 00:04:38.961 "nvmf_create_subsystem", 00:04:38.961 "nvmf_get_subsystems", 00:04:38.961 "env_dpdk_get_mem_stats", 00:04:38.961 "nbd_get_disks", 00:04:38.961 "nbd_stop_disk", 00:04:38.961 "nbd_start_disk", 00:04:38.961 "ublk_recover_disk", 00:04:38.961 "ublk_get_disks", 00:04:38.961 "ublk_stop_disk", 00:04:38.961 "ublk_start_disk", 00:04:38.961 "ublk_destroy_target", 00:04:38.961 "ublk_create_target", 00:04:38.961 "virtio_blk_create_transport", 00:04:38.961 "virtio_blk_get_transports", 00:04:38.961 "vhost_controller_set_coalescing", 00:04:38.961 "vhost_get_controllers", 00:04:38.961 "vhost_delete_controller", 00:04:38.961 "vhost_create_blk_controller", 00:04:38.961 "vhost_scsi_controller_remove_target", 00:04:38.961 "vhost_scsi_controller_add_target", 00:04:38.961 "vhost_start_scsi_controller", 00:04:38.961 "vhost_create_scsi_controller", 00:04:38.961 "thread_set_cpumask", 00:04:38.961 "scheduler_set_options", 00:04:38.961 "framework_get_governor", 00:04:38.961 
"framework_get_scheduler", 00:04:38.961 "framework_set_scheduler", 00:04:38.961 "framework_get_reactors", 00:04:38.961 "thread_get_io_channels", 00:04:38.961 "thread_get_pollers", 00:04:38.961 "thread_get_stats", 00:04:38.961 "framework_monitor_context_switch", 00:04:38.961 "spdk_kill_instance", 00:04:38.961 "log_enable_timestamps", 00:04:38.961 "log_get_flags", 00:04:38.961 "log_clear_flag", 00:04:38.961 "log_set_flag", 00:04:38.961 "log_get_level", 00:04:38.961 "log_set_level", 00:04:38.961 "log_get_print_level", 00:04:38.961 "log_set_print_level", 00:04:38.961 "framework_enable_cpumask_locks", 00:04:38.961 "framework_disable_cpumask_locks", 00:04:38.961 "framework_wait_init", 00:04:38.961 "framework_start_init", 00:04:38.961 "scsi_get_devices", 00:04:38.961 "bdev_get_histogram", 00:04:38.961 "bdev_enable_histogram", 00:04:38.961 "bdev_set_qos_limit", 00:04:38.961 "bdev_set_qd_sampling_period", 00:04:38.961 "bdev_get_bdevs", 00:04:38.961 "bdev_reset_iostat", 00:04:38.961 "bdev_get_iostat", 00:04:38.961 "bdev_examine", 00:04:38.961 "bdev_wait_for_examine", 00:04:38.961 "bdev_set_options", 00:04:38.961 "accel_get_stats", 00:04:38.961 "accel_set_options", 00:04:38.961 "accel_set_driver", 00:04:38.961 "accel_crypto_key_destroy", 00:04:38.961 "accel_crypto_keys_get", 00:04:38.961 "accel_crypto_key_create", 00:04:38.961 "accel_assign_opc", 00:04:38.961 "accel_get_module_info", 00:04:38.961 "accel_get_opc_assignments", 00:04:38.961 "vmd_rescan", 00:04:38.961 "vmd_remove_device", 00:04:38.961 "vmd_enable", 00:04:38.961 "sock_get_default_impl", 00:04:38.961 "sock_set_default_impl", 00:04:38.961 "sock_impl_set_options", 00:04:38.961 "sock_impl_get_options", 00:04:38.961 "iobuf_get_stats", 00:04:38.961 "iobuf_set_options", 00:04:38.961 "keyring_get_keys", 00:04:38.961 "vfu_tgt_set_base_path", 00:04:38.961 "framework_get_pci_devices", 00:04:38.961 "framework_get_config", 00:04:38.961 "framework_get_subsystems", 00:04:38.961 "fsdev_set_opts", 00:04:38.961 "fsdev_get_opts", 
00:04:38.961 "trace_get_info", 00:04:38.961 "trace_get_tpoint_group_mask", 00:04:38.961 "trace_disable_tpoint_group", 00:04:38.961 "trace_enable_tpoint_group", 00:04:38.961 "trace_clear_tpoint_mask", 00:04:38.961 "trace_set_tpoint_mask", 00:04:38.961 "notify_get_notifications", 00:04:38.961 "notify_get_types", 00:04:38.961 "spdk_get_version", 00:04:38.961 "rpc_get_methods" 00:04:38.961 ] 00:04:38.961 17:15:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:38.961 17:15:05 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.961 17:15:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.961 17:15:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:38.961 17:15:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1713403 00:04:38.961 17:15:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1713403 ']' 00:04:38.961 17:15:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1713403 00:04:38.961 17:15:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:38.961 17:15:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.961 17:15:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1713403 00:04:38.961 17:15:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.961 17:15:05 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.961 17:15:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1713403' 00:04:38.961 killing process with pid 1713403 00:04:38.961 17:15:05 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1713403 00:04:38.961 17:15:05 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1713403 00:04:39.219 00:04:39.219 real 0m1.160s 00:04:39.219 user 0m1.951s 00:04:39.219 sys 0m0.457s 00:04:39.219 17:15:05 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.219 17:15:05 
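The spdkcli_tcp test above works by bridging transports: tcp.sh@30 runs `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock`, so rpc.py can use `-s 127.0.0.1 -p 9998` against a server that only listens on a UNIX socket — the long `rpc_get_methods` array is the round-trip proof. A sketch of just the command construction (nothing here assumes socat is installed or a target is live):

```shell
#!/usr/bin/env bash
# Sketch of the TCP-to-UNIX bridge from spdkcli/tcp.sh: socat accepts a TCP
# connection on the given port and forwards the byte stream to the SPDK RPC
# UNIX socket, letting a TCP-only client drive a UNIX-socket-only server.
rpc_bridge_cmd() {
    local port=$1 sock=$2
    printf 'socat TCP-LISTEN:%s UNIX-CONNECT:%s' "$port" "$sock"
}

# The test launches this in the background and kills it in err_cleanup, e.g.:
#   eval "$(rpc_bridge_cmd 9998 /var/tmp/spdk.sock)" &
#   socat_pid=$!
```

The log's `-r 100 -t 2` on rpc.py (retry count and timeout) papers over the race between starting socat and the first TCP connect.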
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.219 ************************************ 00:04:39.219 END TEST spdkcli_tcp 00:04:39.219 ************************************ 00:04:39.500 17:15:05 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:39.500 17:15:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.500 17:15:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.500 17:15:05 -- common/autotest_common.sh@10 -- # set +x 00:04:39.500 ************************************ 00:04:39.500 START TEST dpdk_mem_utility 00:04:39.500 ************************************ 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:39.500 * Looking for test storage... 00:04:39.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.500 17:15:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 
'LCOV_OPTS= 00:04:39.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.500 --rc genhtml_branch_coverage=1 00:04:39.500 --rc genhtml_function_coverage=1 00:04:39.500 --rc genhtml_legend=1 00:04:39.500 --rc geninfo_all_blocks=1 00:04:39.500 --rc geninfo_unexecuted_blocks=1 00:04:39.500 00:04:39.500 ' 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:39.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.500 --rc genhtml_branch_coverage=1 00:04:39.500 --rc genhtml_function_coverage=1 00:04:39.500 --rc genhtml_legend=1 00:04:39.500 --rc geninfo_all_blocks=1 00:04:39.500 --rc geninfo_unexecuted_blocks=1 00:04:39.500 00:04:39.500 ' 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:39.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.500 --rc genhtml_branch_coverage=1 00:04:39.500 --rc genhtml_function_coverage=1 00:04:39.500 --rc genhtml_legend=1 00:04:39.500 --rc geninfo_all_blocks=1 00:04:39.500 --rc geninfo_unexecuted_blocks=1 00:04:39.500 00:04:39.500 ' 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:39.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.500 --rc genhtml_branch_coverage=1 00:04:39.500 --rc genhtml_function_coverage=1 00:04:39.500 --rc genhtml_legend=1 00:04:39.500 --rc geninfo_all_blocks=1 00:04:39.500 --rc geninfo_unexecuted_blocks=1 00:04:39.500 00:04:39.500 ' 00:04:39.500 17:15:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:39.500 17:15:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1713735 00:04:39.500 17:15:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1713735 00:04:39.500 17:15:05 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1713735 ']' 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.500 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.501 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.501 17:15:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.811 [2024-12-09 17:15:06.041690] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:04:39.811 [2024-12-09 17:15:06.041739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713735 ] 00:04:39.811 [2024-12-09 17:15:06.117189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.811 [2024-12-09 17:15:06.157837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.071 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.071 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:40.071 17:15:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:40.071 17:15:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:40.071 17:15:06 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.071 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.071 { 00:04:40.071 "filename": "/tmp/spdk_mem_dump.txt" 00:04:40.071 } 00:04:40.071 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.071 17:15:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:40.071 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:40.071 1 heaps totaling size 818.000000 MiB 00:04:40.071 size: 818.000000 MiB heap id: 0 00:04:40.071 end heaps---------- 00:04:40.071 9 mempools totaling size 603.782043 MiB 00:04:40.071 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:40.071 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:40.072 size: 100.555481 MiB name: bdev_io_1713735 00:04:40.072 size: 50.003479 MiB name: msgpool_1713735 00:04:40.072 size: 36.509338 MiB name: fsdev_io_1713735 00:04:40.072 size: 21.763794 MiB name: PDU_Pool 00:04:40.072 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:40.072 size: 4.133484 MiB name: evtpool_1713735 00:04:40.072 size: 0.026123 MiB name: Session_Pool 00:04:40.072 end mempools------- 00:04:40.072 6 memzones totaling size 4.142822 MiB 00:04:40.072 size: 1.000366 MiB name: RG_ring_0_1713735 00:04:40.072 size: 1.000366 MiB name: RG_ring_1_1713735 00:04:40.072 size: 1.000366 MiB name: RG_ring_4_1713735 00:04:40.072 size: 1.000366 MiB name: RG_ring_5_1713735 00:04:40.072 size: 0.125366 MiB name: RG_ring_2_1713735 00:04:40.072 size: 0.015991 MiB name: RG_ring_3_1713735 00:04:40.072 end memzones------- 00:04:40.072 17:15:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:40.072 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:40.072 list of free elements. 
size: 10.852478 MiB 00:04:40.072 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:40.072 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:40.072 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:40.072 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:40.072 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:40.072 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:40.072 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:40.072 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:40.072 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:40.072 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:40.072 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:40.072 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:40.072 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:40.072 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:40.072 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:40.072 list of standard malloc elements. 
size: 199.218628 MiB 00:04:40.072 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:40.072 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:40.072 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:40.072 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:40.072 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:40.072 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:40.072 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:40.072 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:40.072 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:40.072 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:40.072 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:40.072 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:40.072 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:40.072 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:40.072 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:40.072 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:40.072 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:40.072 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:40.072 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:40.072 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:40.072 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:40.072 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:40.072 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:40.072 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:40.072 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:40.072 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:40.072 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:40.072 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:40.072 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:40.072 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:40.072 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:40.072 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:40.072 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:40.072 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:40.072 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:40.072 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:40.072 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:40.072 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:40.072 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:40.072 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:40.072 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:40.072 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:40.072 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:40.072 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:40.072 list of memzone associated elements. 
size: 607.928894 MiB 00:04:40.072 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:40.072 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:40.072 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:40.072 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:40.072 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:40.072 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1713735_0 00:04:40.072 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:40.072 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1713735_0 00:04:40.072 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:40.072 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1713735_0 00:04:40.072 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:40.072 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:40.072 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:40.072 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:40.072 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:40.072 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1713735_0 00:04:40.072 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:40.072 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1713735 00:04:40.072 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:40.072 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1713735 00:04:40.072 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:40.072 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:40.072 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:40.072 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:40.072 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:40.072 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:40.072 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:40.072 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:40.072 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:40.072 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1713735 00:04:40.072 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:40.072 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1713735 00:04:40.072 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:40.072 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1713735 00:04:40.072 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:40.072 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1713735 00:04:40.072 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:40.072 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1713735 00:04:40.072 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:40.072 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1713735 00:04:40.072 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:40.072 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:40.072 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:40.072 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:40.072 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:40.072 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:40.072 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:40.072 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1713735 00:04:40.072 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:40.072 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1713735 00:04:40.072 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:40.072 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:40.072 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:40.072 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:40.072 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:40.072 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1713735 00:04:40.072 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:40.072 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:40.072 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:40.072 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1713735 00:04:40.072 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:40.072 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1713735 00:04:40.072 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:40.072 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1713735 00:04:40.072 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:40.072 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:40.072 17:15:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:40.072 17:15:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1713735 00:04:40.072 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1713735 ']' 00:04:40.073 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1713735 00:04:40.073 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:40.073 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.073 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1713735 00:04:40.073 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.073 17:15:06 
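The `dpdk_mem_info.py` dump above reports heap, mempool, and memzone usage as repeated `size: <MiB> MiB name: <name>` records. As a rough illustration only (a hypothetical parser written for this note, not part of the SPDK scripts), those records could be tallied like this:

```python
import re

# Matches records such as "size: 50.003479 MiB name: msgpool_1713735"
# as they appear in the dump above; the pattern is an assumption based
# on the logged output, not taken from the SPDK source tree.
RECORD = re.compile(r"size:\s+([0-9.]+)\s+MiB\s+name:\s+(\S+)")

def tally(dump_text):
    """Sum the reported MiB per named pool/zone in a mem-info dump."""
    totals = {}
    for size, name in RECORD.findall(dump_text):
        totals[name] = totals.get(name, 0.0) + float(size)
    return totals

sample = """\
size: 50.003479 MiB name: msgpool_1713735
size: 21.763794 MiB name: PDU_Pool
size: 0.026123 MiB name: Session_Pool
"""
print(tally(sample))
```

This kind of tally makes it easier to spot which pools (here the msgpool and PDU pools) dominate the roughly 603 MiB of mempool usage the dump reports.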
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.073 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1713735' 00:04:40.073 killing process with pid 1713735 00:04:40.073 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1713735 00:04:40.073 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1713735 00:04:40.331 00:04:40.331 real 0m1.024s 00:04:40.331 user 0m0.950s 00:04:40.331 sys 0m0.413s 00:04:40.331 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.331 17:15:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.331 ************************************ 00:04:40.331 END TEST dpdk_mem_utility 00:04:40.331 ************************************ 00:04:40.590 17:15:06 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:40.590 17:15:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.590 17:15:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.590 17:15:06 -- common/autotest_common.sh@10 -- # set +x 00:04:40.590 ************************************ 00:04:40.590 START TEST event 00:04:40.590 ************************************ 00:04:40.590 17:15:06 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:40.590 * Looking for test storage... 
00:04:40.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:40.590 17:15:06 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.590 17:15:06 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.590 17:15:06 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.590 17:15:07 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.590 17:15:07 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.590 17:15:07 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.590 17:15:07 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.590 17:15:07 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.590 17:15:07 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.590 17:15:07 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.590 17:15:07 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.590 17:15:07 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.590 17:15:07 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.590 17:15:07 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.590 17:15:07 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.590 17:15:07 event -- scripts/common.sh@344 -- # case "$op" in 00:04:40.590 17:15:07 event -- scripts/common.sh@345 -- # : 1 00:04:40.590 17:15:07 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.590 17:15:07 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.590 17:15:07 event -- scripts/common.sh@365 -- # decimal 1 00:04:40.590 17:15:07 event -- scripts/common.sh@353 -- # local d=1 00:04:40.590 17:15:07 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.590 17:15:07 event -- scripts/common.sh@355 -- # echo 1 00:04:40.590 17:15:07 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.590 17:15:07 event -- scripts/common.sh@366 -- # decimal 2 00:04:40.590 17:15:07 event -- scripts/common.sh@353 -- # local d=2 00:04:40.590 17:15:07 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.590 17:15:07 event -- scripts/common.sh@355 -- # echo 2 00:04:40.590 17:15:07 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.590 17:15:07 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.590 17:15:07 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.590 17:15:07 event -- scripts/common.sh@368 -- # return 0 00:04:40.590 17:15:07 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.590 17:15:07 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.590 --rc genhtml_branch_coverage=1 00:04:40.590 --rc genhtml_function_coverage=1 00:04:40.590 --rc genhtml_legend=1 00:04:40.590 --rc geninfo_all_blocks=1 00:04:40.591 --rc geninfo_unexecuted_blocks=1 00:04:40.591 00:04:40.591 ' 00:04:40.591 17:15:07 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.591 --rc genhtml_branch_coverage=1 00:04:40.591 --rc genhtml_function_coverage=1 00:04:40.591 --rc genhtml_legend=1 00:04:40.591 --rc geninfo_all_blocks=1 00:04:40.591 --rc geninfo_unexecuted_blocks=1 00:04:40.591 00:04:40.591 ' 00:04:40.591 17:15:07 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:40.591 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:40.591 --rc genhtml_branch_coverage=1 00:04:40.591 --rc genhtml_function_coverage=1 00:04:40.591 --rc genhtml_legend=1 00:04:40.591 --rc geninfo_all_blocks=1 00:04:40.591 --rc geninfo_unexecuted_blocks=1 00:04:40.591 00:04:40.591 ' 00:04:40.591 17:15:07 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.591 --rc genhtml_branch_coverage=1 00:04:40.591 --rc genhtml_function_coverage=1 00:04:40.591 --rc genhtml_legend=1 00:04:40.591 --rc geninfo_all_blocks=1 00:04:40.591 --rc geninfo_unexecuted_blocks=1 00:04:40.591 00:04:40.591 ' 00:04:40.591 17:15:07 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:40.591 17:15:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:40.591 17:15:07 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:40.591 17:15:07 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:40.591 17:15:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.591 17:15:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.591 ************************************ 00:04:40.591 START TEST event_perf 00:04:40.591 ************************************ 00:04:40.591 17:15:07 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:40.849 Running I/O for 1 seconds...[2024-12-09 17:15:07.135047] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:04:40.849 [2024-12-09 17:15:07.135118] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714287 ] 00:04:40.849 [2024-12-09 17:15:07.212146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:40.849 [2024-12-09 17:15:07.254927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.849 [2024-12-09 17:15:07.255024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.849 [2024-12-09 17:15:07.255130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.849 [2024-12-09 17:15:07.255131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:41.784 Running I/O for 1 seconds... 00:04:41.784 lcore 0: 203146 00:04:41.784 lcore 1: 203147 00:04:41.784 lcore 2: 203147 00:04:41.784 lcore 3: 203146 00:04:41.784 done. 
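The event_perf run above shows each of the four reactors processing roughly 203k events during the 1-second window. A quick back-of-the-envelope aggregate of those per-lcore counts (values copied from the log; the summing itself is just illustrative arithmetic):

```python
# Per-lcore event counts as printed by the event_perf run above.
lcore_counts = {0: 203146, 1: 203147, 2: 203147, 3: 203146}

total = sum(lcore_counts.values())          # events across all 4 cores
per_core_rate = total / len(lcore_counts)   # mean events/sec per lcore

print(total, per_core_rate)  # 812586 events total, 203146.5 per core
```

The near-identical per-core counts indicate the event framework spread work evenly across the 0xF core mask.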
00:04:41.784 00:04:41.784 real 0m1.179s 00:04:41.784 user 0m4.093s 00:04:41.784 sys 0m0.080s 00:04:41.784 17:15:08 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.784 17:15:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:41.784 ************************************ 00:04:41.784 END TEST event_perf 00:04:41.784 ************************************ 00:04:42.042 17:15:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:42.042 17:15:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:42.042 17:15:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.042 17:15:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.042 ************************************ 00:04:42.042 START TEST event_reactor 00:04:42.042 ************************************ 00:04:42.042 17:15:08 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:42.042 [2024-12-09 17:15:08.376728] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:04:42.042 [2024-12-09 17:15:08.376795] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714626 ] 00:04:42.042 [2024-12-09 17:15:08.455954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.042 [2024-12-09 17:15:08.493168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.427 test_start 00:04:43.427 oneshot 00:04:43.427 tick 100 00:04:43.427 tick 100 00:04:43.427 tick 250 00:04:43.427 tick 100 00:04:43.427 tick 100 00:04:43.427 tick 100 00:04:43.427 tick 250 00:04:43.427 tick 500 00:04:43.427 tick 100 00:04:43.427 tick 100 00:04:43.427 tick 250 00:04:43.427 tick 100 00:04:43.427 tick 100 00:04:43.427 test_end 00:04:43.427 00:04:43.427 real 0m1.173s 00:04:43.427 user 0m1.093s 00:04:43.427 sys 0m0.076s 00:04:43.427 17:15:09 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.427 17:15:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:43.427 ************************************ 00:04:43.427 END TEST event_reactor 00:04:43.427 ************************************ 00:04:43.427 17:15:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.427 17:15:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:43.427 17:15:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.427 17:15:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.427 ************************************ 00:04:43.427 START TEST event_reactor_perf 00:04:43.427 ************************************ 00:04:43.427 17:15:09 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:43.427 [2024-12-09 17:15:09.617963] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:04:43.427 [2024-12-09 17:15:09.618027] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714872 ] 00:04:43.427 [2024-12-09 17:15:09.695981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.427 [2024-12-09 17:15:09.734364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.362 test_start 00:04:44.362 test_end 00:04:44.362 Performance: 496940 events per second 00:04:44.362 00:04:44.362 real 0m1.177s 00:04:44.362 user 0m1.101s 00:04:44.362 sys 0m0.072s 00:04:44.362 17:15:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.362 17:15:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.362 ************************************ 00:04:44.362 END TEST event_reactor_perf 00:04:44.362 ************************************ 00:04:44.362 17:15:10 event -- event/event.sh@49 -- # uname -s 00:04:44.362 17:15:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:44.362 17:15:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:44.362 17:15:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.362 17:15:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.362 17:15:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.362 ************************************ 00:04:44.362 START TEST event_scheduler 00:04:44.362 ************************************ 00:04:44.362 17:15:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:44.622 * Looking for test storage... 00:04:44.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:44.622 17:15:10 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.622 17:15:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:44.622 17:15:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.622 17:15:11 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.622 17:15:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:44.622 17:15:11 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.622 17:15:11 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.622 --rc genhtml_branch_coverage=1 00:04:44.622 --rc genhtml_function_coverage=1 00:04:44.622 --rc genhtml_legend=1 00:04:44.622 --rc geninfo_all_blocks=1 00:04:44.622 --rc geninfo_unexecuted_blocks=1 00:04:44.622 00:04:44.622 ' 00:04:44.622 17:15:11 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.622 --rc genhtml_branch_coverage=1 00:04:44.622 --rc genhtml_function_coverage=1 00:04:44.622 --rc 
genhtml_legend=1 00:04:44.622 --rc geninfo_all_blocks=1 00:04:44.622 --rc geninfo_unexecuted_blocks=1 00:04:44.622 00:04:44.622 ' 00:04:44.622 17:15:11 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.622 --rc genhtml_branch_coverage=1 00:04:44.622 --rc genhtml_function_coverage=1 00:04:44.622 --rc genhtml_legend=1 00:04:44.622 --rc geninfo_all_blocks=1 00:04:44.622 --rc geninfo_unexecuted_blocks=1 00:04:44.622 00:04:44.622 ' 00:04:44.622 17:15:11 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.622 --rc genhtml_branch_coverage=1 00:04:44.622 --rc genhtml_function_coverage=1 00:04:44.622 --rc genhtml_legend=1 00:04:44.622 --rc geninfo_all_blocks=1 00:04:44.622 --rc geninfo_unexecuted_blocks=1 00:04:44.622 00:04:44.622 ' 00:04:44.622 17:15:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:44.622 17:15:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1715148 00:04:44.622 17:15:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:44.622 17:15:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.622 17:15:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1715148 00:04:44.622 17:15:11 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1715148 ']' 00:04:44.622 17:15:11 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.622 17:15:11 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.622 17:15:11 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.622 17:15:11 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.622 17:15:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.622 [2024-12-09 17:15:11.070552] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:04:44.622 [2024-12-09 17:15:11.070605] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1715148 ] 00:04:44.622 [2024-12-09 17:15:11.146432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:44.882 [2024-12-09 17:15:11.190404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.882 [2024-12-09 17:15:11.190514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.882 [2024-12-09 17:15:11.190599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.882 [2024-12-09 17:15:11.190600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.882 17:15:11 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.882 17:15:11 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:44.882 17:15:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:44.882 17:15:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.882 17:15:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.882 [2024-12-09 17:15:11.239192] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:44.882 [2024-12-09 17:15:11.239212] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:44.882 [2024-12-09 17:15:11.239222] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:44.882 [2024-12-09 17:15:11.239228] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:44.882 [2024-12-09 17:15:11.239234] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:44.882 17:15:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.882 17:15:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:44.882 17:15:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.882 17:15:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.882 [2024-12-09 17:15:11.313595] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:44.882 17:15:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.882 17:15:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:44.882 17:15:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.882 17:15:11 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.882 17:15:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.882 ************************************ 00:04:44.882 START TEST scheduler_create_thread 00:04:44.882 ************************************ 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.882 2 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.882 3 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.882 4 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.882 5 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.882 17:15:11 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.882 6 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.882 7 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.882 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.141 8 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.141 17:15:11 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.141 9 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.141 10 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.141 17:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.079 17:15:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.079 17:15:12 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:46.079 17:15:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.079 17:15:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.456 17:15:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.456 17:15:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:47.456 17:15:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:47.456 17:15:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.456 17:15:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.390 17:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.390 00:04:48.390 real 0m3.381s 00:04:48.390 user 0m0.024s 00:04:48.390 sys 0m0.005s 00:04:48.390 17:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.390 17:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.390 ************************************ 00:04:48.390 END TEST scheduler_create_thread 00:04:48.390 ************************************ 00:04:48.390 17:15:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:48.390 17:15:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1715148 00:04:48.390 17:15:14 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1715148 ']' 00:04:48.390 17:15:14 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1715148 00:04:48.390 17:15:14 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:48.390 17:15:14 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.390 17:15:14 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1715148 00:04:48.390 17:15:14 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:48.390 17:15:14 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:48.390 17:15:14 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1715148' 00:04:48.390 killing process with pid 1715148 00:04:48.390 17:15:14 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1715148 00:04:48.390 17:15:14 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1715148 00:04:48.648 [2024-12-09 17:15:15.109454] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:48.906 00:04:48.906 real 0m4.464s 00:04:48.906 user 0m7.821s 00:04:48.906 sys 0m0.384s 00:04:48.906 17:15:15 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.906 17:15:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.906 ************************************ 00:04:48.906 END TEST event_scheduler 00:04:48.906 ************************************ 00:04:48.906 17:15:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:48.906 17:15:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:48.906 17:15:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.906 17:15:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.906 17:15:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.906 ************************************ 00:04:48.906 START TEST app_repeat 00:04:48.906 ************************************ 00:04:48.906 17:15:15 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1715875 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1715875' 00:04:48.906 Process app_repeat pid: 1715875 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:48.906 spdk_app_start Round 0 00:04:48.906 17:15:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1715875 /var/tmp/spdk-nbd.sock 00:04:48.906 17:15:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1715875 ']' 00:04:48.906 17:15:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:48.906 17:15:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.906 17:15:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:48.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:48.906 17:15:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.906 17:15:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:48.906 [2024-12-09 17:15:15.414857] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:04:48.906 [2024-12-09 17:15:15.414904] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1715875 ] 00:04:49.164 [2024-12-09 17:15:15.487361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.164 [2024-12-09 17:15:15.541894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.164 [2024-12-09 17:15:15.541897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.164 17:15:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.164 17:15:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:49.164 17:15:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.422 Malloc0 00:04:49.422 17:15:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.680 Malloc1 00:04:49.680 17:15:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.680 
17:15:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.680 17:15:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:49.938 /dev/nbd0 00:04:49.938 17:15:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:49.938 17:15:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:49.938 17:15:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:49.938 17:15:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:49.938 17:15:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:49.938 17:15:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:49.938 17:15:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:49.938 17:15:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:49.938 17:15:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:49.938 17:15:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:49.938 17:15:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:49.938 1+0 records in 00:04:49.938 1+0 records out 00:04:49.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223433 s, 18.3 MB/s 00:04:49.938 17:15:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.939 17:15:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:49.939 17:15:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.939 17:15:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:49.939 17:15:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:49.939 17:15:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.939 17:15:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.939 17:15:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.197 /dev/nbd1 00:04:50.197 17:15:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:50.197 17:15:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:50.197 17:15:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:50.197 17:15:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:50.197 17:15:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:50.197 17:15:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:50.197 17:15:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:50.197 17:15:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:50.197 17:15:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:50.197 17:15:16 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:50.197 17:15:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.197 1+0 records in 00:04:50.197 1+0 records out 00:04:50.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195615 s, 20.9 MB/s 00:04:50.197 17:15:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.197 17:15:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:50.197 17:15:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.197 17:15:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:50.197 17:15:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:50.197 17:15:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.197 17:15:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.197 17:15:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.197 17:15:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.197 17:15:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:50.455 { 00:04:50.455 "nbd_device": "/dev/nbd0", 00:04:50.455 "bdev_name": "Malloc0" 00:04:50.455 }, 00:04:50.455 { 00:04:50.455 "nbd_device": "/dev/nbd1", 00:04:50.455 "bdev_name": "Malloc1" 00:04:50.455 } 00:04:50.455 ]' 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:50.455 { 00:04:50.455 "nbd_device": "/dev/nbd0", 00:04:50.455 "bdev_name": "Malloc0" 00:04:50.455 
}, 00:04:50.455 { 00:04:50.455 "nbd_device": "/dev/nbd1", 00:04:50.455 "bdev_name": "Malloc1" 00:04:50.455 } 00:04:50.455 ]' 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:50.455 /dev/nbd1' 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:50.455 /dev/nbd1' 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:50.455 256+0 records in 00:04:50.455 256+0 records out 00:04:50.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106633 s, 98.3 MB/s 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:50.455 256+0 records in 00:04:50.455 256+0 records out 00:04:50.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134738 s, 77.8 MB/s 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:50.455 256+0 records in 00:04:50.455 256+0 records out 00:04:50.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147899 s, 70.9 MB/s 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:50.455 17:15:16 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.455 17:15:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:50.713 17:15:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:50.713 17:15:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:50.713 17:15:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:50.713 17:15:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.713 17:15:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.713 17:15:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:50.713 17:15:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.713 17:15:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.713 17:15:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.713 17:15:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:50.971 17:15:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:50.971 17:15:17 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:50.971 17:15:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:50.971 17:15:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.971 17:15:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.971 17:15:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:50.971 17:15:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.971 17:15:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.971 17:15:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.971 17:15:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.971 17:15:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.229 17:15:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:51.229 17:15:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:51.229 17:15:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.229 17:15:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:51.229 17:15:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:51.229 17:15:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.229 17:15:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:51.229 17:15:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:51.229 17:15:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:51.229 17:15:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:51.229 17:15:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:51.229 17:15:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:51.229 17:15:17 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:51.488 17:15:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:51.488 [2024-12-09 17:15:17.949548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.488 [2024-12-09 17:15:17.984938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.488 [2024-12-09 17:15:17.984938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.488 [2024-12-09 17:15:18.025370] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:51.488 [2024-12-09 17:15:18.025408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.768 17:15:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.768 17:15:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:54.768 spdk_app_start Round 1 00:04:54.768 17:15:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1715875 /var/tmp/spdk-nbd.sock 00:04:54.768 17:15:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1715875 ']' 00:04:54.768 17:15:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.768 17:15:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.768 17:15:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:54.768 17:15:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.768 17:15:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.768 17:15:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.768 17:15:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:54.768 17:15:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.768 Malloc0 00:04:54.768 17:15:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.026 Malloc1 00:04:55.026 17:15:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.026 17:15:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.284 /dev/nbd0 00:04:55.284 17:15:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.284 17:15:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.284 17:15:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:55.284 17:15:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:55.284 17:15:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:55.284 17:15:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:55.284 17:15:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:55.284 17:15:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:55.284 17:15:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:55.284 17:15:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:55.284 17:15:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.284 1+0 records in 00:04:55.284 1+0 records out 00:04:55.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232987 s, 17.6 MB/s 00:04:55.284 17:15:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.284 17:15:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:55.284 17:15:21 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.284 17:15:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:55.284 17:15:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:55.284 17:15:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.284 17:15:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.284 17:15:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:55.542 /dev/nbd1 00:04:55.542 17:15:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.542 17:15:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.542 1+0 records in 00:04:55.542 1+0 records out 00:04:55.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228812 s, 17.9 MB/s 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:55.542 17:15:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:55.542 17:15:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.542 17:15:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.542 17:15:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.542 17:15:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.542 17:15:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:55.801 { 00:04:55.801 "nbd_device": "/dev/nbd0", 00:04:55.801 "bdev_name": "Malloc0" 00:04:55.801 }, 00:04:55.801 { 00:04:55.801 "nbd_device": "/dev/nbd1", 00:04:55.801 "bdev_name": "Malloc1" 00:04:55.801 } 00:04:55.801 ]' 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:55.801 { 00:04:55.801 "nbd_device": "/dev/nbd0", 00:04:55.801 "bdev_name": "Malloc0" 00:04:55.801 }, 00:04:55.801 { 00:04:55.801 "nbd_device": "/dev/nbd1", 00:04:55.801 "bdev_name": "Malloc1" 00:04:55.801 } 00:04:55.801 ]' 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:55.801 /dev/nbd1' 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:55.801 /dev/nbd1' 00:04:55.801 
17:15:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.801 256+0 records in 00:04:55.801 256+0 records out 00:04:55.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100422 s, 104 MB/s 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.801 256+0 records in 00:04:55.801 256+0 records out 00:04:55.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137679 s, 76.2 MB/s 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.801 256+0 records in 00:04:55.801 256+0 records out 00:04:55.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150043 s, 69.9 MB/s 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.801 17:15:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.059 17:15:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.059 17:15:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.059 17:15:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.059 17:15:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.059 17:15:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.059 17:15:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.059 17:15:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.059 17:15:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.059 17:15:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.059 17:15:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.317 17:15:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.317 17:15:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.317 17:15:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.317 17:15:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.317 17:15:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.317 17:15:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.317 17:15:22 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:56.317 17:15:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.317 17:15:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.317 17:15:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.317 17:15:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.574 17:15:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:56.574 17:15:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:56.574 17:15:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.575 17:15:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:56.575 17:15:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:56.575 17:15:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.575 17:15:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:56.575 17:15:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:56.575 17:15:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:56.575 17:15:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:56.575 17:15:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:56.575 17:15:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:56.575 17:15:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.833 17:15:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.833 [2024-12-09 17:15:23.267764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.833 [2024-12-09 17:15:23.303036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.833 [2024-12-09 17:15:23.303036] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.833 [2024-12-09 17:15:23.344026] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.833 [2024-12-09 17:15:23.344065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.117 17:15:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.117 17:15:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:00.117 spdk_app_start Round 2 00:05:00.117 17:15:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1715875 /var/tmp/spdk-nbd.sock 00:05:00.117 17:15:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1715875 ']' 00:05:00.117 17:15:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.117 17:15:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.117 17:15:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:00.117 17:15:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.117 17:15:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.117 17:15:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.117 17:15:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:00.117 17:15:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.117 Malloc0 00:05:00.117 17:15:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.375 Malloc1 00:05:00.375 17:15:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.375 17:15:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.633 /dev/nbd0 00:05:00.633 17:15:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.633 17:15:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.633 17:15:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:00.633 17:15:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:00.633 17:15:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:00.633 17:15:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:00.633 17:15:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:00.633 17:15:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:00.633 17:15:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:00.633 17:15:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:00.633 17:15:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.633 1+0 records in 00:05:00.633 1+0 records out 00:05:00.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231942 s, 17.7 MB/s 00:05:00.633 17:15:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.633 17:15:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:00.633 17:15:27 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.633 17:15:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:00.633 17:15:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:00.633 17:15:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.633 17:15:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.633 17:15:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.891 /dev/nbd1 00:05:00.891 17:15:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.891 17:15:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.891 1+0 records in 00:05:00.891 1+0 records out 00:05:00.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022676 s, 18.1 MB/s 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:00.891 17:15:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:00.891 17:15:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.891 17:15:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.891 17:15:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.891 17:15:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.891 17:15:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.149 { 00:05:01.149 "nbd_device": "/dev/nbd0", 00:05:01.149 "bdev_name": "Malloc0" 00:05:01.149 }, 00:05:01.149 { 00:05:01.149 "nbd_device": "/dev/nbd1", 00:05:01.149 "bdev_name": "Malloc1" 00:05:01.149 } 00:05:01.149 ]' 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.149 { 00:05:01.149 "nbd_device": "/dev/nbd0", 00:05:01.149 "bdev_name": "Malloc0" 00:05:01.149 }, 00:05:01.149 { 00:05:01.149 "nbd_device": "/dev/nbd1", 00:05:01.149 "bdev_name": "Malloc1" 00:05:01.149 } 00:05:01.149 ]' 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.149 /dev/nbd1' 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.149 17:15:27 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.149 /dev/nbd1' 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.149 256+0 records in 00:05:01.149 256+0 records out 00:05:01.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106532 s, 98.4 MB/s 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.149 256+0 records in 00:05:01.149 256+0 records out 00:05:01.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140055 s, 74.9 MB/s 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.149 256+0 records in 00:05:01.149 256+0 records out 00:05:01.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014793 s, 70.9 MB/s 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.149 17:15:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.407 17:15:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.407 17:15:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.407 17:15:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.407 17:15:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.407 17:15:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.407 17:15:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.407 17:15:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.407 17:15:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.407 17:15:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.407 17:15:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.407 17:15:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.665 17:15:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.665 17:15:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.665 17:15:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.665 17:15:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.665 17:15:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.665 17:15:27 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:01.665 17:15:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.665 17:15:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.665 17:15:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.665 17:15:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.665 17:15:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.665 17:15:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.665 17:15:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.923 17:15:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.923 17:15:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.923 17:15:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.923 17:15:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.923 17:15:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.923 17:15:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.923 17:15:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.923 17:15:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.923 17:15:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.923 17:15:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.923 17:15:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.182 [2024-12-09 17:15:28.581868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.182 [2024-12-09 17:15:28.617089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.182 [2024-12-09 17:15:28.617090] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.182 [2024-12-09 17:15:28.657491] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.182 [2024-12-09 17:15:28.657531] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.462 17:15:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1715875 /var/tmp/spdk-nbd.sock 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1715875 ']' 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
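The nbd_dd_data_verify trace above writes one data file to each exported /dev/nbdX with dd and then compares every device back against the source with `cmp -b -n 1M`. A minimal stand-alone sketch of that write-then-verify pattern, with ordinary temp files standing in for the nbd devices (the 1 MiB size and the cmp flags come from the trace; the random data source and file handling here are illustrative):

```shell
# Write-then-verify sketch: regular files stand in for /dev/nbd0 and /dev/nbd1.
tmp_file=$(mktemp)
targets=("$(mktemp)" "$(mktemp)")

# 256 x 4096-byte blocks = 1 MiB, matching the dd invocation in the trace.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none

for t in "${targets[@]}"; do
    dd if="$tmp_file" of="$t" bs=4096 count=256 status=none
done

# cmp -b -n 1M is the per-device check nbd_common.sh@83 runs above.
verify_status=ok
for t in "${targets[@]}"; do
    cmp -b -n 1M "$tmp_file" "$t" || verify_status=failed
done

rm -f "$tmp_file" "${targets[@]}"
```

The trace also passes `oflag=direct` when the target really is a block device; it is omitted here because direct I/O needs an aligned block-device target.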
00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:05.462 17:15:31 event.app_repeat -- event/event.sh@39 -- # killprocess 1715875 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1715875 ']' 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1715875 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1715875 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1715875' 00:05:05.462 killing process with pid 1715875 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1715875 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1715875 00:05:05.462 spdk_app_start is called in Round 0. 00:05:05.462 Shutdown signal received, stop current app iteration 00:05:05.462 Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 reinitialization... 00:05:05.462 spdk_app_start is called in Round 1. 00:05:05.462 Shutdown signal received, stop current app iteration 00:05:05.462 Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 reinitialization... 00:05:05.462 spdk_app_start is called in Round 2. 
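The waitfornbd_exit calls traced earlier poll `/proc/partitions` up to 20 times and break as soon as the nbdX entry disappears. The same bounded-retry shape as a generic helper (the predicate and the sub-second sleep are illustrative; the 20-iteration bound comes from the trace):

```shell
# Retry a predicate up to a fixed number of times, like waitfornbd_exit does
# with 'grep -q -w nbdX /proc/partitions'.
wait_for() {                        # wait_for <max_tries> <cmd...>
    local max=$1 i
    shift
    for (( i = 1; i <= max; i++ )); do
        "$@" && return 0            # condition met: stop polling
        sleep 0.1
    done
    return 1                        # gave up after max tries
}

marker=$(mktemp -u)                 # path only; the file does not exist yet
( sleep 0.3; touch "$marker" ) &    # something creates it shortly
wait_for 20 test -e "$marker" && waited=ok
wait                                # reap the background job
rm -f "$marker"
```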
00:05:05.462 Shutdown signal received, stop current app iteration 00:05:05.462 Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 reinitialization... 00:05:05.462 spdk_app_start is called in Round 3. 00:05:05.462 Shutdown signal received, stop current app iteration 00:05:05.462 17:15:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:05.462 17:15:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:05.462 00:05:05.462 real 0m16.431s 00:05:05.462 user 0m36.198s 00:05:05.462 sys 0m2.532s 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.462 17:15:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.462 ************************************ 00:05:05.462 END TEST app_repeat 00:05:05.462 ************************************ 00:05:05.462 17:15:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:05.462 17:15:31 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:05.462 17:15:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.462 17:15:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.462 17:15:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.462 ************************************ 00:05:05.462 START TEST cpu_locks 00:05:05.462 ************************************ 00:05:05.462 17:15:31 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:05.462 * Looking for test storage... 
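The killprocess helper traced above first resolves the pid's command name with ps and refuses to signal anything running as sudo, then kills and reaps the target. A sketch reconstructed from the autotest_common.sh xtrace (the reactor_0 name seen in the trace is just SPDK's thread name as reported by ps):

```shell
# Reconstructed from the xtrace above: verify the process name before
# signalling, never kill a sudo wrapper, then reap the child.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1          # not running: nothing to do
    name=$(ps --no-headers -o comm= "$pid")         # comm lookup, as @960 does
    [ "$name" = sudo ] && return 1                  # refusal check from @964
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap if it is our child
}

sleep 60 &                                          # throwaway target process
target=$!
killprocess "$target"
```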
00:05:05.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:05.462 17:15:31 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.462 17:15:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.462 17:15:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.721 17:15:32 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.721 17:15:32 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:05.721 17:15:32 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.721 17:15:32 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.721 --rc genhtml_branch_coverage=1 00:05:05.721 --rc genhtml_function_coverage=1 00:05:05.721 --rc genhtml_legend=1 00:05:05.721 --rc geninfo_all_blocks=1 00:05:05.721 --rc geninfo_unexecuted_blocks=1 00:05:05.721 00:05:05.721 ' 00:05:05.721 17:15:32 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.721 --rc genhtml_branch_coverage=1 00:05:05.721 --rc genhtml_function_coverage=1 00:05:05.721 --rc genhtml_legend=1 00:05:05.721 --rc geninfo_all_blocks=1 00:05:05.721 --rc geninfo_unexecuted_blocks=1 
00:05:05.721 00:05:05.721 ' 00:05:05.721 17:15:32 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.721 --rc genhtml_branch_coverage=1 00:05:05.721 --rc genhtml_function_coverage=1 00:05:05.721 --rc genhtml_legend=1 00:05:05.721 --rc geninfo_all_blocks=1 00:05:05.721 --rc geninfo_unexecuted_blocks=1 00:05:05.721 00:05:05.721 ' 00:05:05.722 17:15:32 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.722 --rc genhtml_branch_coverage=1 00:05:05.722 --rc genhtml_function_coverage=1 00:05:05.722 --rc genhtml_legend=1 00:05:05.722 --rc geninfo_all_blocks=1 00:05:05.722 --rc geninfo_unexecuted_blocks=1 00:05:05.722 00:05:05.722 ' 00:05:05.722 17:15:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:05.722 17:15:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:05.722 17:15:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:05.722 17:15:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:05.722 17:15:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.722 17:15:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.722 17:15:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.722 ************************************ 00:05:05.722 START TEST default_locks 00:05:05.722 ************************************ 00:05:05.722 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:05.722 17:15:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1718889 00:05:05.722 17:15:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1718889 00:05:05.722 17:15:32 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.722 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1718889 ']' 00:05:05.722 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.722 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.722 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.722 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.722 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.722 [2024-12-09 17:15:32.144369] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
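The cmp_versions trace above (scripts/common.sh) splits both version strings on `.`, `-` and `:` into arrays and walks them field by field, here establishing that lcov's 1.15 is below 2. The same logic as a compact stand-alone helper (the name `version_lt` is ours, not SPDK's):

```shell
version_lt() {                      # returns 0 (true) when $1 < $2
    local -a ver1 ver2
    local v a b
    IFS=.-: read -ra ver1 <<< "$1"  # split on '.', '-' and ':' as the trace does
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}             # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                        # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Fields are compared numerically, so alphabetic suffixes would need extra handling; the trace only ever feeds this comparison dotted numeric versions.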
00:05:05.722 [2024-12-09 17:15:32.144417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1718889 ] 00:05:05.722 [2024-12-09 17:15:32.218650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.722 [2024-12-09 17:15:32.259116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.980 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.980 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:05.980 17:15:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1718889 00:05:05.980 17:15:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1718889 00:05:05.981 17:15:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:06.239 lslocks: write error 00:05:06.239 17:15:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1718889 00:05:06.239 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1718889 ']' 00:05:06.239 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1718889 00:05:06.239 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:06.239 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.239 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1718889 00:05:06.239 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.239 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.239 17:15:32 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1718889' 00:05:06.239 killing process with pid 1718889 00:05:06.239 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1718889 00:05:06.239 17:15:32 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1718889 00:05:06.497 17:15:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1718889 00:05:06.497 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:06.497 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1718889 00:05:06.497 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:06.497 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.497 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:06.497 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.497 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1718889 00:05:06.497 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1718889 ']' 00:05:06.497 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.497 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.497 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
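locks_exist above confirms that the spdk_tgt pid holds its per-core file lock by grepping `lslocks -p` output for spdk_cpu_lock (the stray "lslocks: write error" is SIGPIPE noise from `grep -q` closing the pipe after the first match). A self-contained sketch of observing such a lock without lslocks: take an exclusive flock ourselves and show that a second non-blocking attempt fails while it is held (the file path is illustrative; the real lock files are the spdk_cpu_lock entries lslocks reports):

```shell
lockfile=$(mktemp)                  # stand-in for a per-core spdk_cpu_lock file

exec 9>"$lockfile"                  # fd stays open for the lifetime of the lock
flock 9                             # exclusive lock, like SPDK's core lock

# While held, a second process's non-blocking attempt must fail -- the same
# fact 'lslocks -p <pid> | grep -q spdk_cpu_lock' asserts in the trace.
if flock -n "$lockfile" true; then held=no; else held=yes; fi

exec 9>&-                           # closing the fd drops the lock
if flock -n "$lockfile" true; then released=yes; else released=no; fi
rm -f "$lockfile"
```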
00:05:06.497 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.756 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1718889) - No such process 00:05:06.756 ERROR: process (pid: 1718889) is no longer running 00:05:06.756 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.756 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:06.756 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:06.756 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:06.756 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:06.756 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:06.756 17:15:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:06.756 17:15:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:06.756 17:15:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:06.756 17:15:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:06.756 00:05:06.756 real 0m0.949s 00:05:06.756 user 0m0.888s 00:05:06.756 sys 0m0.446s 00:05:06.756 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.756 17:15:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.756 ************************************ 00:05:06.756 END TEST default_locks 00:05:06.756 ************************************ 00:05:06.756 17:15:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:06.756 17:15:33 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.756 17:15:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.756 17:15:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.756 ************************************ 00:05:06.756 START TEST default_locks_via_rpc 00:05:06.756 ************************************ 00:05:06.756 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:06.756 17:15:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1719051 00:05:06.756 17:15:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1719051 00:05:06.756 17:15:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.756 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1719051 ']' 00:05:06.756 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.756 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.756 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.756 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.756 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.756 [2024-12-09 17:15:33.163388] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
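The default_locks test above ends with a negative check: NOT reruns waitforlisten against the now-dead pid, captures `es=1`, and passes only because the failure is an ordinary exit rather than a signal death (the `(( es > 128 ))` guard). That expected-failure wrapper, sketched stand-alone (the name `expect_failure` is ours; the trace's helper is called NOT):

```shell
# Run a command that is *supposed* to fail; succeed only on an ordinary
# non-zero exit. Statuses above 128 mean the command died from a signal,
# which the trace's (( es > 128 )) check treats as a real error.
expect_failure() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1      # killed by a signal: not a clean failure
    (( es != 0 ))                   # succeed only if the command failed
}

expect_failure false && echo "failed as expected"
```

Capturing the status with `|| es=$?` keeps the expected failure from tripping `set -e` in the surrounding script, which is why the trace's helper is shaped this way.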
00:05:06.756 [2024-12-09 17:15:33.163430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719051 ] 00:05:06.756 [2024-12-09 17:15:33.235630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.756 [2024-12-09 17:15:33.275899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.015 17:15:33 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1719051 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1719051 00:05:07.015 17:15:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.582 17:15:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1719051 00:05:07.582 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1719051 ']' 00:05:07.582 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1719051 00:05:07.582 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:07.582 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.582 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1719051 00:05:07.582 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.582 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.582 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1719051' 00:05:07.582 killing process with pid 1719051 00:05:07.582 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1719051 00:05:07.582 17:15:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1719051 00:05:07.841 00:05:07.841 real 0m1.165s 00:05:07.841 user 0m1.118s 00:05:07.841 sys 0m0.518s 00:05:07.841 17:15:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.841 17:15:34 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.841 ************************************ 00:05:07.841 END TEST default_locks_via_rpc 00:05:07.841 ************************************ 00:05:07.841 17:15:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:07.841 17:15:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.841 17:15:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.841 17:15:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.841 ************************************ 00:05:07.841 START TEST non_locking_app_on_locked_coremask 00:05:07.841 ************************************ 00:05:07.841 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:07.841 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1719306 00:05:07.841 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1719306 /var/tmp/spdk.sock 00:05:07.841 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.841 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1719306 ']' 00:05:07.841 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.841 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.841 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:07.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.841 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.841 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.101 [2024-12-09 17:15:34.397709] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:05:08.101 [2024-12-09 17:15:34.397751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719306 ] 00:05:08.101 [2024-12-09 17:15:34.472939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.101 [2024-12-09 17:15:34.513517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.359 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.360 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:08.360 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1719353 00:05:08.360 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1719353 /var/tmp/spdk2.sock 00:05:08.360 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:08.360 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1719353 ']' 00:05:08.360 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock
00:05:08.360 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:08.360 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:08.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:08.360 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:08.360 17:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:08.360 [2024-12-09 17:15:34.770920] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
00:05:08.360 [2024-12-09 17:15:34.770968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719353 ]
00:05:08.360 [2024-12-09 17:15:34.854863] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:08.360 [2024-12-09 17:15:34.854890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.618 [2024-12-09 17:15:34.942314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.183 17:15:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:09.183 17:15:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:09.183 17:15:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1719306
00:05:09.183 17:15:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:09.183 17:15:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1719306
00:05:09.750 lslocks: write error
00:05:09.750 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1719306
00:05:09.750 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1719306 ']'
00:05:09.750 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1719306
00:05:09.750 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:09.750 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:09.750 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1719306
00:05:09.750 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:09.750 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:09.750 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1719306'
00:05:09.750 killing process with pid 1719306
00:05:09.750 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1719306
00:05:09.750 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1719306
00:05:10.696 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1719353
00:05:10.696 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1719353 ']'
00:05:10.696 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1719353
00:05:10.696 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:10.696 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:10.696 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1719353
00:05:10.696 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:10.696 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:10.696 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1719353'
00:05:10.696 killing process with pid 1719353
00:05:10.696 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1719353
00:05:10.696 17:15:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1719353
00:05:10.696
00:05:10.696 real 0m2.800s
00:05:10.696 user 0m2.942s
00:05:10.696 sys 0m0.942s
00:05:10.696 17:15:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:10.696 17:15:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:10.696 ************************************
00:05:10.696 END TEST non_locking_app_on_locked_coremask
00:05:10.696 ************************************
00:05:10.696 17:15:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:10.696 17:15:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:10.696 17:15:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:10.696 17:15:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:10.696 ************************************
00:05:10.696 START TEST locking_app_on_unlocked_coremask
00:05:10.696 ************************************
00:05:10.696 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:05:10.696 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1719788
00:05:10.696 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1719788 /var/tmp/spdk.sock
00:05:10.696 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:10.696 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1719788 ']'
00:05:10.696 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:10.696 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:10.696 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:10.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:10.697 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:10.697 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:10.959 [2024-12-09 17:15:37.270157] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
00:05:10.959 [2024-12-09 17:15:37.270215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719788 ]
00:05:10.959 [2024-12-09 17:15:37.342895] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:10.959 [2024-12-09 17:15:37.342921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:10.959 [2024-12-09 17:15:37.383704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:11.218 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:11.218 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:11.218 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1719948
00:05:11.218 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1719948 /var/tmp/spdk2.sock
00:05:11.218 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:11.218 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1719948 ']'
00:05:11.218 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:11.218 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:11.218 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:11.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:11.218 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:11.218 17:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:11.218 [2024-12-09 17:15:37.648970] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
00:05:11.218 [2024-12-09 17:15:37.649020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719948 ]
00:05:11.218 [2024-12-09 17:15:37.736405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:11.477 [2024-12-09 17:15:37.823027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.044 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:12.044 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:12.044 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1719948
00:05:12.044 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1719948
00:05:12.044 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:12.610 lslocks: write error
00:05:12.610 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1719788
00:05:12.610 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1719788 ']'
00:05:12.610 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1719788
00:05:12.610 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:12.610 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:12.610 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1719788
00:05:12.610 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:12.610 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:12.610 17:15:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1719788'
00:05:12.610 killing process with pid 1719788
00:05:12.610 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1719788
00:05:12.610 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1719788
00:05:13.178 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1719948
00:05:13.178 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1719948 ']'
00:05:13.178 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1719948
00:05:13.178 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:13.178 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:13.178 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1719948
00:05:13.178 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:13.178 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:13.178 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1719948'
00:05:13.178 killing process with pid 1719948
00:05:13.178 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1719948
00:05:13.178 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1719948
00:05:13.437
00:05:13.437 real 0m2.721s
00:05:13.437 user 0m2.855s
00:05:13.437 sys 0m0.923s
00:05:13.437 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:13.437 17:15:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:13.437 ************************************
00:05:13.437 END TEST locking_app_on_unlocked_coremask
00:05:13.437 ************************************
00:05:13.437 17:15:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:13.437 17:15:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:13.437 17:15:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:13.437 17:15:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:13.696 ************************************
00:05:13.696 START TEST locking_app_on_locked_coremask
00:05:13.696 ************************************
00:05:13.696 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:05:13.696 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1720277
00:05:13.696 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1720277 /var/tmp/spdk.sock
00:05:13.696 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:13.696 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1720277 ']'
00:05:13.696 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:13.696 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:13.696 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:13.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:13.696 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:13.696 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:13.696 [2024-12-09 17:15:40.058440] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
00:05:13.696 [2024-12-09 17:15:40.058496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720277 ]
00:05:13.696 [2024-12-09 17:15:40.133705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:13.696 [2024-12-09 17:15:40.173430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1720495
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1720495 /var/tmp/spdk2.sock
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1720495 /var/tmp/spdk2.sock
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1720495 /var/tmp/spdk2.sock
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1720495 ']'
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:13.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:13.954 17:15:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:13.954 [2024-12-09 17:15:40.449022] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
00:05:13.954 [2024-12-09 17:15:40.449069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720495 ]
00:05:14.212 [2024-12-09 17:15:40.537506] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1720277 has claimed it.
00:05:14.212 [2024-12-09 17:15:40.537547] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:14.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1720495) - No such process
00:05:14.779 ERROR: process (pid: 1720495) is no longer running
00:05:14.779 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:14.779 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:14.779 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:14.779 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:14.779 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:14.779 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:14.779 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1720277
00:05:14.779 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1720277
00:05:14.779 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:15.037 lslocks: write error
00:05:15.037 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1720277
00:05:15.037 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1720277 ']'
00:05:15.037 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1720277
00:05:15.037 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:15.037 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:15.037 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1720277
00:05:15.296 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:15.296 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:15.296 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1720277'
00:05:15.296 killing process with pid 1720277
00:05:15.296 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1720277
00:05:15.296 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1720277
00:05:15.555
00:05:15.555 real 0m1.884s
00:05:15.555 user 0m2.021s
00:05:15.555 sys 0m0.639s
00:05:15.555 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:15.555 17:15:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:15.555 ************************************
00:05:15.555 END TEST locking_app_on_locked_coremask
00:05:15.555 ************************************
00:05:15.555 17:15:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:15.555 17:15:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:15.555 17:15:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:15.555 17:15:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:15.555 ************************************
00:05:15.555 START TEST locking_overlapped_coremask
00:05:15.555 ************************************
00:05:15.555 17:15:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:15.555 17:15:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1720747
00:05:15.555 17:15:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1720747 /var/tmp/spdk.sock
00:05:15.555 17:15:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:15.555 17:15:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1720747 ']'
00:05:15.555 17:15:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:15.555 17:15:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:15.555 17:15:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:15.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:15.555 17:15:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:15.555 17:15:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:15.555 [2024-12-09 17:15:42.011347] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
00:05:15.555 [2024-12-09 17:15:42.011388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720747 ]
00:05:15.813 [2024-12-09 17:15:42.083018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:15.813 [2024-12-09 17:15:42.122193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:15.813 [2024-12-09 17:15:42.122264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:15.813 [2024-12-09 17:15:42.122264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1720760
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1720760 /var/tmp/spdk2.sock
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1720760 /var/tmp/spdk2.sock
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1720760 /var/tmp/spdk2.sock
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1720760 ']'
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:15.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:15.813 17:15:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:16.070 [2024-12-09 17:15:42.396171] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
00:05:16.070 [2024-12-09 17:15:42.396231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720760 ]
00:05:16.070 [2024-12-09 17:15:42.488600] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1720747 has claimed it.
00:05:16.070 [2024-12-09 17:15:42.488639] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:16.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1720760) - No such process
00:05:16.634 ERROR: process (pid: 1720760) is no longer running
00:05:16.634 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:16.634 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1720747
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1720747 ']'
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1720747
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1720747
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1720747'
00:05:16.635 killing process with pid 1720747
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1720747
00:05:16.635 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1720747
00:05:16.893
00:05:16.893 real 0m1.432s
00:05:16.893 user 0m3.948s
00:05:16.893 sys 0m0.396s
00:05:16.893 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:16.893 17:15:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:16.893 ************************************
00:05:16.893 END TEST locking_overlapped_coremask
00:05:16.893 ************************************
00:05:16.893 17:15:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:16.893 17:15:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:16.893 17:15:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:16.893 17:15:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:17.152 ************************************
00:05:17.152 START TEST locking_overlapped_coremask_via_rpc
00:05:17.152 ************************************
00:05:17.152 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:17.152 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1721012
00:05:17.152 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:17.152 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1721012 /var/tmp/spdk.sock
00:05:17.152 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1721012 ']'
00:05:17.152 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:17.152 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:17.152 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:17.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:17.152 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:17.152 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.152 [2024-12-09 17:15:43.512234] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
00:05:17.152 [2024-12-09 17:15:43.512276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721012 ]
00:05:17.152 [2024-12-09 17:15:43.586406] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:17.152 [2024-12-09 17:15:43.586431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:17.152 [2024-12-09 17:15:43.629318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:17.152 [2024-12-09 17:15:43.629410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:17.152 [2024-12-09 17:15:43.629411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.410 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:17.410 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:17.410 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1721024
00:05:17.410 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1721024 /var/tmp/spdk2.sock
00:05:17.410 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:17.410 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1721024 ']'
00:05:17.410 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:17.410 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:17.410 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:17.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:17.410 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:17.410 17:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.669 [2024-12-09 17:15:43.896084] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
00:05:17.669 [2024-12-09 17:15:43.896130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721024 ]
00:05:17.669 [2024-12-09 17:15:43.984757] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:17.669 [2024-12-09 17:15:43.984780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.669 [2024-12-09 17:15:44.067156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.669 [2024-12-09 17:15:44.070211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.669 [2024-12-09 17:15:44.070211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:18.233 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.233 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.233 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.234 17:15:44 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.234 [2024-12-09 17:15:44.728239] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1721012 has claimed it. 00:05:18.234 request: 00:05:18.234 { 00:05:18.234 "method": "framework_enable_cpumask_locks", 00:05:18.234 "req_id": 1 00:05:18.234 } 00:05:18.234 Got JSON-RPC error response 00:05:18.234 response: 00:05:18.234 { 00:05:18.234 "code": -32603, 00:05:18.234 "message": "Failed to claim CPU core: 2" 00:05:18.234 } 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1721012 /var/tmp/spdk.sock 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1721012 ']' 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.234 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.492 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.492 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.492 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1721024 /var/tmp/spdk2.sock 00:05:18.492 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1721024 ']' 00:05:18.492 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.492 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.492 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:18.492 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.492 17:15:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.759 17:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.759 17:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.759 17:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:18.759 17:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:18.759 17:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:18.759 17:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:18.759 00:05:18.759 real 0m1.684s 00:05:18.759 user 0m0.796s 00:05:18.759 sys 0m0.143s 00:05:18.759 17:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.759 17:15:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.759 ************************************ 00:05:18.759 END TEST locking_overlapped_coremask_via_rpc 00:05:18.759 ************************************ 00:05:18.759 17:15:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:18.759 17:15:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1721012 ]] 00:05:18.759 17:15:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1721012 00:05:18.759 17:15:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1721012 ']' 00:05:18.759 17:15:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1721012 00:05:18.759 17:15:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:18.759 17:15:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.759 17:15:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1721012 00:05:18.759 17:15:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.759 17:15:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.759 17:15:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1721012' 00:05:18.759 killing process with pid 1721012 00:05:18.759 17:15:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1721012 00:05:18.759 17:15:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1721012 00:05:19.019 17:15:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1721024 ]] 00:05:19.019 17:15:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1721024 00:05:19.019 17:15:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1721024 ']' 00:05:19.019 17:15:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1721024 00:05:19.019 17:15:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:19.019 17:15:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.019 17:15:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1721024 00:05:19.278 17:15:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:19.278 17:15:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:19.278 17:15:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1721024' 00:05:19.278 killing process with pid 1721024 00:05:19.278 17:15:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1721024 00:05:19.278 17:15:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1721024 00:05:19.537 17:15:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:19.537 17:15:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:19.537 17:15:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1721012 ]] 00:05:19.537 17:15:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1721012 00:05:19.537 17:15:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1721012 ']' 00:05:19.537 17:15:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1721012 00:05:19.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1721012) - No such process 00:05:19.537 17:15:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1721012 is not found' 00:05:19.537 Process with pid 1721012 is not found 00:05:19.537 17:15:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1721024 ]] 00:05:19.537 17:15:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1721024 00:05:19.537 17:15:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1721024 ']' 00:05:19.537 17:15:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1721024 00:05:19.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1721024) - No such process 00:05:19.537 17:15:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1721024 is not found' 00:05:19.537 Process with pid 1721024 is not found 00:05:19.537 17:15:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:19.537 00:05:19.537 real 0m14.027s 00:05:19.537 user 0m24.229s 00:05:19.537 sys 0m4.952s 00:05:19.537 17:15:45 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.537 
17:15:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.537 ************************************ 00:05:19.537 END TEST cpu_locks 00:05:19.537 ************************************ 00:05:19.537 00:05:19.537 real 0m39.040s 00:05:19.537 user 1m14.809s 00:05:19.537 sys 0m8.451s 00:05:19.537 17:15:45 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.537 17:15:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.537 ************************************ 00:05:19.537 END TEST event 00:05:19.537 ************************************ 00:05:19.537 17:15:45 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:19.537 17:15:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.537 17:15:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.537 17:15:45 -- common/autotest_common.sh@10 -- # set +x 00:05:19.537 ************************************ 00:05:19.537 START TEST thread 00:05:19.537 ************************************ 00:05:19.537 17:15:46 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:19.796 * Looking for test storage... 
00:05:19.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:19.796 17:15:46 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.796 17:15:46 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.796 17:15:46 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.797 17:15:46 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.797 17:15:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.797 17:15:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.797 17:15:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.797 17:15:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.797 17:15:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.797 17:15:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.797 17:15:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.797 17:15:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.797 17:15:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.797 17:15:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.797 17:15:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.797 17:15:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:19.797 17:15:46 thread -- scripts/common.sh@345 -- # : 1 00:05:19.797 17:15:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.797 17:15:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.797 17:15:46 thread -- scripts/common.sh@365 -- # decimal 1 00:05:19.797 17:15:46 thread -- scripts/common.sh@353 -- # local d=1 00:05:19.797 17:15:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.797 17:15:46 thread -- scripts/common.sh@355 -- # echo 1 00:05:19.797 17:15:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.797 17:15:46 thread -- scripts/common.sh@366 -- # decimal 2 00:05:19.797 17:15:46 thread -- scripts/common.sh@353 -- # local d=2 00:05:19.797 17:15:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.797 17:15:46 thread -- scripts/common.sh@355 -- # echo 2 00:05:19.797 17:15:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.797 17:15:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.797 17:15:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.797 17:15:46 thread -- scripts/common.sh@368 -- # return 0 00:05:19.797 17:15:46 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.797 17:15:46 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.797 --rc genhtml_branch_coverage=1 00:05:19.797 --rc genhtml_function_coverage=1 00:05:19.797 --rc genhtml_legend=1 00:05:19.797 --rc geninfo_all_blocks=1 00:05:19.797 --rc geninfo_unexecuted_blocks=1 00:05:19.797 00:05:19.797 ' 00:05:19.797 17:15:46 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.797 --rc genhtml_branch_coverage=1 00:05:19.797 --rc genhtml_function_coverage=1 00:05:19.797 --rc genhtml_legend=1 00:05:19.797 --rc geninfo_all_blocks=1 00:05:19.797 --rc geninfo_unexecuted_blocks=1 00:05:19.797 00:05:19.797 ' 00:05:19.797 17:15:46 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.797 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.797 --rc genhtml_branch_coverage=1 00:05:19.797 --rc genhtml_function_coverage=1 00:05:19.797 --rc genhtml_legend=1 00:05:19.797 --rc geninfo_all_blocks=1 00:05:19.797 --rc geninfo_unexecuted_blocks=1 00:05:19.797 00:05:19.797 ' 00:05:19.797 17:15:46 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.797 --rc genhtml_branch_coverage=1 00:05:19.797 --rc genhtml_function_coverage=1 00:05:19.797 --rc genhtml_legend=1 00:05:19.797 --rc geninfo_all_blocks=1 00:05:19.797 --rc geninfo_unexecuted_blocks=1 00:05:19.797 00:05:19.797 ' 00:05:19.797 17:15:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.797 17:15:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:19.797 17:15:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.797 17:15:46 thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.797 ************************************ 00:05:19.797 START TEST thread_poller_perf 00:05:19.797 ************************************ 00:05:19.797 17:15:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.797 [2024-12-09 17:15:46.243459] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:05:19.797 [2024-12-09 17:15:46.243527] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721569 ] 00:05:19.797 [2024-12-09 17:15:46.318536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.056 [2024-12-09 17:15:46.357785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.056 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:20.992 [2024-12-09T16:15:47.532Z] ====================================== 00:05:20.992 [2024-12-09T16:15:47.532Z] busy:2107973252 (cyc) 00:05:20.992 [2024-12-09T16:15:47.532Z] total_run_count: 419000 00:05:20.992 [2024-12-09T16:15:47.532Z] tsc_hz: 2100000000 (cyc) 00:05:20.992 [2024-12-09T16:15:47.532Z] ====================================== 00:05:20.992 [2024-12-09T16:15:47.532Z] poller_cost: 5030 (cyc), 2395 (nsec) 00:05:20.992 00:05:20.992 real 0m1.178s 00:05:20.992 user 0m1.094s 00:05:20.992 sys 0m0.080s 00:05:20.992 17:15:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.992 17:15:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.992 ************************************ 00:05:20.992 END TEST thread_poller_perf 00:05:20.992 ************************************ 00:05:20.992 17:15:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.992 17:15:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:20.992 17:15:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.992 17:15:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.992 ************************************ 00:05:20.992 START TEST thread_poller_perf 00:05:20.992 
************************************ 00:05:20.992 17:15:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.992 [2024-12-09 17:15:47.489515] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:05:20.992 [2024-12-09 17:15:47.489572] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721812 ] 00:05:21.251 [2024-12-09 17:15:47.569373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.251 [2024-12-09 17:15:47.607381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.251 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:22.190 [2024-12-09T16:15:48.730Z] ====================================== 00:05:22.190 [2024-12-09T16:15:48.730Z] busy:2101270074 (cyc) 00:05:22.190 [2024-12-09T16:15:48.730Z] total_run_count: 5063000 00:05:22.190 [2024-12-09T16:15:48.730Z] tsc_hz: 2100000000 (cyc) 00:05:22.190 [2024-12-09T16:15:48.730Z] ====================================== 00:05:22.190 [2024-12-09T16:15:48.730Z] poller_cost: 415 (cyc), 197 (nsec) 00:05:22.190 00:05:22.190 real 0m1.179s 00:05:22.190 user 0m1.101s 00:05:22.190 sys 0m0.074s 00:05:22.190 17:15:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.190 17:15:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.190 ************************************ 00:05:22.190 END TEST thread_poller_perf 00:05:22.190 ************************************ 00:05:22.190 17:15:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:22.190 00:05:22.190 real 0m2.666s 00:05:22.190 user 0m2.346s 00:05:22.190 sys 0m0.335s 00:05:22.190 17:15:48 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.190 17:15:48 thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.190 ************************************ 00:05:22.190 END TEST thread 00:05:22.190 ************************************ 00:05:22.190 17:15:48 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:22.190 17:15:48 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:22.190 17:15:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.190 17:15:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.190 17:15:48 -- common/autotest_common.sh@10 -- # set +x 00:05:22.525 ************************************ 00:05:22.525 START TEST app_cmdline 00:05:22.525 ************************************ 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:22.525 * Looking for test storage... 00:05:22.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.525 17:15:48 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:22.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.525 --rc genhtml_branch_coverage=1 
00:05:22.525 --rc genhtml_function_coverage=1 00:05:22.525 --rc genhtml_legend=1 00:05:22.525 --rc geninfo_all_blocks=1 00:05:22.525 --rc geninfo_unexecuted_blocks=1 00:05:22.525 00:05:22.525 ' 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:22.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.525 --rc genhtml_branch_coverage=1 00:05:22.525 --rc genhtml_function_coverage=1 00:05:22.525 --rc genhtml_legend=1 00:05:22.525 --rc geninfo_all_blocks=1 00:05:22.525 --rc geninfo_unexecuted_blocks=1 00:05:22.525 00:05:22.525 ' 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:22.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.525 --rc genhtml_branch_coverage=1 00:05:22.525 --rc genhtml_function_coverage=1 00:05:22.525 --rc genhtml_legend=1 00:05:22.525 --rc geninfo_all_blocks=1 00:05:22.525 --rc geninfo_unexecuted_blocks=1 00:05:22.525 00:05:22.525 ' 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:22.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.525 --rc genhtml_branch_coverage=1 00:05:22.525 --rc genhtml_function_coverage=1 00:05:22.525 --rc genhtml_legend=1 00:05:22.525 --rc geninfo_all_blocks=1 00:05:22.525 --rc geninfo_unexecuted_blocks=1 00:05:22.525 00:05:22.525 ' 00:05:22.525 17:15:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:22.525 17:15:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1722108 00:05:22.525 17:15:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1722108 00:05:22.525 17:15:48 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1722108 ']' 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.525 17:15:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:22.525 [2024-12-09 17:15:48.980963] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:05:22.525 [2024-12-09 17:15:48.981012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722108 ] 00:05:22.808 [2024-12-09 17:15:49.056918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.808 [2024-12-09 17:15:49.098798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.808 17:15:49 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.808 17:15:49 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:22.808 17:15:49 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:23.067 { 00:05:23.067 "version": "SPDK v25.01-pre git sha1 608f2e392", 00:05:23.067 "fields": { 00:05:23.067 "major": 25, 00:05:23.067 "minor": 1, 00:05:23.067 "patch": 0, 00:05:23.067 "suffix": "-pre", 00:05:23.067 "commit": "608f2e392" 00:05:23.067 } 00:05:23.067 } 00:05:23.067 17:15:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:23.067 17:15:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:23.067 17:15:49 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:23.067 17:15:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:23.067 17:15:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:23.067 17:15:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.067 17:15:49 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.067 17:15:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:23.067 17:15:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:23.067 17:15:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:23.067 17:15:49 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:23.326 request: 00:05:23.326 { 00:05:23.326 "method": "env_dpdk_get_mem_stats", 00:05:23.326 "req_id": 1 00:05:23.326 } 00:05:23.326 Got JSON-RPC error response 00:05:23.326 response: 00:05:23.326 { 00:05:23.326 "code": -32601, 00:05:23.326 "message": "Method not found" 00:05:23.326 } 00:05:23.326 17:15:49 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:23.326 17:15:49 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.326 17:15:49 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:23.326 17:15:49 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:23.326 17:15:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1722108 00:05:23.326 17:15:49 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1722108 ']' 00:05:23.326 17:15:49 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1722108 00:05:23.326 17:15:49 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:23.326 17:15:49 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.326 17:15:49 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1722108 00:05:23.326 17:15:49 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.326 17:15:49 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.326 17:15:49 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1722108' 00:05:23.326 killing process with pid 1722108 00:05:23.326 
17:15:49 app_cmdline -- common/autotest_common.sh@973 -- # kill 1722108 00:05:23.326 17:15:49 app_cmdline -- common/autotest_common.sh@978 -- # wait 1722108 00:05:23.585 00:05:23.585 real 0m1.313s 00:05:23.585 user 0m1.506s 00:05:23.585 sys 0m0.449s 00:05:23.585 17:15:50 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.585 17:15:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:23.585 ************************************ 00:05:23.585 END TEST app_cmdline 00:05:23.585 ************************************ 00:05:23.585 17:15:50 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:23.585 17:15:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.585 17:15:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.585 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:23.845 ************************************ 00:05:23.845 START TEST version 00:05:23.845 ************************************ 00:05:23.845 17:15:50 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:23.845 * Looking for test storage... 
00:05:23.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:23.845 17:15:50 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.845 17:15:50 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.845 17:15:50 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.845 17:15:50 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.845 17:15:50 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.845 17:15:50 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.845 17:15:50 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.845 17:15:50 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.845 17:15:50 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.845 17:15:50 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.845 17:15:50 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.845 17:15:50 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.845 17:15:50 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.845 17:15:50 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.845 17:15:50 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.845 17:15:50 version -- scripts/common.sh@344 -- # case "$op" in 00:05:23.845 17:15:50 version -- scripts/common.sh@345 -- # : 1 00:05:23.845 17:15:50 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.845 17:15:50 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.845 17:15:50 version -- scripts/common.sh@365 -- # decimal 1 00:05:23.845 17:15:50 version -- scripts/common.sh@353 -- # local d=1 00:05:23.845 17:15:50 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.845 17:15:50 version -- scripts/common.sh@355 -- # echo 1 00:05:23.845 17:15:50 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.845 17:15:50 version -- scripts/common.sh@366 -- # decimal 2 00:05:23.845 17:15:50 version -- scripts/common.sh@353 -- # local d=2 00:05:23.845 17:15:50 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.845 17:15:50 version -- scripts/common.sh@355 -- # echo 2 00:05:23.845 17:15:50 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.845 17:15:50 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.845 17:15:50 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.845 17:15:50 version -- scripts/common.sh@368 -- # return 0 00:05:23.845 17:15:50 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.845 17:15:50 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.845 --rc genhtml_branch_coverage=1 00:05:23.845 --rc genhtml_function_coverage=1 00:05:23.845 --rc genhtml_legend=1 00:05:23.845 --rc geninfo_all_blocks=1 00:05:23.845 --rc geninfo_unexecuted_blocks=1 00:05:23.845 00:05:23.845 ' 00:05:23.845 17:15:50 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.845 --rc genhtml_branch_coverage=1 00:05:23.845 --rc genhtml_function_coverage=1 00:05:23.845 --rc genhtml_legend=1 00:05:23.845 --rc geninfo_all_blocks=1 00:05:23.845 --rc geninfo_unexecuted_blocks=1 00:05:23.845 00:05:23.845 ' 00:05:23.845 17:15:50 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.845 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.845 --rc genhtml_branch_coverage=1 00:05:23.845 --rc genhtml_function_coverage=1 00:05:23.845 --rc genhtml_legend=1 00:05:23.845 --rc geninfo_all_blocks=1 00:05:23.845 --rc geninfo_unexecuted_blocks=1 00:05:23.845 00:05:23.845 ' 00:05:23.845 17:15:50 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.845 --rc genhtml_branch_coverage=1 00:05:23.845 --rc genhtml_function_coverage=1 00:05:23.845 --rc genhtml_legend=1 00:05:23.845 --rc geninfo_all_blocks=1 00:05:23.845 --rc geninfo_unexecuted_blocks=1 00:05:23.845 00:05:23.845 ' 00:05:23.845 17:15:50 version -- app/version.sh@17 -- # get_header_version major 00:05:23.845 17:15:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.845 17:15:50 version -- app/version.sh@14 -- # cut -f2 00:05:23.845 17:15:50 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.845 17:15:50 version -- app/version.sh@17 -- # major=25 00:05:23.845 17:15:50 version -- app/version.sh@18 -- # get_header_version minor 00:05:23.845 17:15:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.845 17:15:50 version -- app/version.sh@14 -- # cut -f2 00:05:23.845 17:15:50 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.845 17:15:50 version -- app/version.sh@18 -- # minor=1 00:05:23.845 17:15:50 version -- app/version.sh@19 -- # get_header_version patch 00:05:23.845 17:15:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.845 17:15:50 version -- app/version.sh@14 -- # cut -f2 00:05:23.845 17:15:50 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.845 
17:15:50 version -- app/version.sh@19 -- # patch=0 00:05:23.845 17:15:50 version -- app/version.sh@20 -- # get_header_version suffix 00:05:23.845 17:15:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.845 17:15:50 version -- app/version.sh@14 -- # cut -f2 00:05:23.845 17:15:50 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.845 17:15:50 version -- app/version.sh@20 -- # suffix=-pre 00:05:23.845 17:15:50 version -- app/version.sh@22 -- # version=25.1 00:05:23.845 17:15:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:23.845 17:15:50 version -- app/version.sh@28 -- # version=25.1rc0 00:05:23.845 17:15:50 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:23.845 17:15:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:23.845 17:15:50 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:23.845 17:15:50 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:23.845 00:05:23.845 real 0m0.248s 00:05:23.845 user 0m0.144s 00:05:23.845 sys 0m0.146s 00:05:23.845 17:15:50 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.845 17:15:50 version -- common/autotest_common.sh@10 -- # set +x 00:05:23.845 ************************************ 00:05:23.845 END TEST version 00:05:23.845 ************************************ 00:05:24.104 17:15:50 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:24.104 17:15:50 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:24.104 17:15:50 -- spdk/autotest.sh@194 -- # uname -s 00:05:24.104 17:15:50 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:24.104 17:15:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:24.104 17:15:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:24.104 17:15:50 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:24.104 17:15:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:24.104 17:15:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:24.104 17:15:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.104 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.104 17:15:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:24.104 17:15:50 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:24.104 17:15:50 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:24.104 17:15:50 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:24.104 17:15:50 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:24.104 17:15:50 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:24.104 17:15:50 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:24.104 17:15:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:24.104 17:15:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.104 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.104 ************************************ 00:05:24.104 START TEST nvmf_tcp 00:05:24.104 ************************************ 00:05:24.104 17:15:50 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:24.104 * Looking for test storage... 
00:05:24.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:24.104 17:15:50 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.104 17:15:50 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.104 17:15:50 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.363 17:15:50 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:24.363 17:15:50 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:24.364 17:15:50 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.364 17:15:50 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:24.364 17:15:50 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.364 17:15:50 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.364 17:15:50 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.364 17:15:50 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:24.364 17:15:50 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.364 17:15:50 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.364 --rc genhtml_branch_coverage=1 00:05:24.364 --rc genhtml_function_coverage=1 00:05:24.364 --rc genhtml_legend=1 00:05:24.364 --rc geninfo_all_blocks=1 00:05:24.364 --rc geninfo_unexecuted_blocks=1 00:05:24.364 00:05:24.364 ' 00:05:24.364 17:15:50 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.364 --rc genhtml_branch_coverage=1 00:05:24.364 --rc genhtml_function_coverage=1 00:05:24.364 --rc genhtml_legend=1 00:05:24.364 --rc geninfo_all_blocks=1 00:05:24.364 --rc geninfo_unexecuted_blocks=1 00:05:24.364 00:05:24.364 ' 00:05:24.364 17:15:50 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:24.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.364 --rc genhtml_branch_coverage=1 00:05:24.364 --rc genhtml_function_coverage=1 00:05:24.364 --rc genhtml_legend=1 00:05:24.364 --rc geninfo_all_blocks=1 00:05:24.364 --rc geninfo_unexecuted_blocks=1 00:05:24.364 00:05:24.364 ' 00:05:24.364 17:15:50 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.364 --rc genhtml_branch_coverage=1 00:05:24.364 --rc genhtml_function_coverage=1 00:05:24.364 --rc genhtml_legend=1 00:05:24.364 --rc geninfo_all_blocks=1 00:05:24.364 --rc geninfo_unexecuted_blocks=1 00:05:24.364 00:05:24.364 ' 00:05:24.364 17:15:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:24.364 17:15:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:24.364 17:15:50 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:24.364 17:15:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:24.364 17:15:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.364 17:15:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.364 ************************************ 00:05:24.364 START TEST nvmf_target_core 00:05:24.364 ************************************ 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:24.364 * Looking for test storage... 
00:05:24.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.364 --rc genhtml_branch_coverage=1 00:05:24.364 --rc genhtml_function_coverage=1 00:05:24.364 --rc genhtml_legend=1 00:05:24.364 --rc geninfo_all_blocks=1 00:05:24.364 --rc geninfo_unexecuted_blocks=1 00:05:24.364 00:05:24.364 ' 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.364 --rc genhtml_branch_coverage=1 
00:05:24.364 --rc genhtml_function_coverage=1 00:05:24.364 --rc genhtml_legend=1 00:05:24.364 --rc geninfo_all_blocks=1 00:05:24.364 --rc geninfo_unexecuted_blocks=1 00:05:24.364 00:05:24.364 ' 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.364 --rc genhtml_branch_coverage=1 00:05:24.364 --rc genhtml_function_coverage=1 00:05:24.364 --rc genhtml_legend=1 00:05:24.364 --rc geninfo_all_blocks=1 00:05:24.364 --rc geninfo_unexecuted_blocks=1 00:05:24.364 00:05:24.364 ' 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.364 --rc genhtml_branch_coverage=1 00:05:24.364 --rc genhtml_function_coverage=1 00:05:24.364 --rc genhtml_legend=1 00:05:24.364 --rc geninfo_all_blocks=1 00:05:24.364 --rc geninfo_unexecuted_blocks=1 00:05:24.364 00:05:24.364 ' 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.364 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:24.624 ************************************ 00:05:24.624 START TEST nvmf_abort 00:05:24.624 ************************************ 00:05:24.624 17:15:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:24.624 * Looking for test storage... 
00:05:24.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:24.624 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.624 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.624 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.624 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.624 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.624 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.624 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.624 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.624 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.625 
17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.625 --rc genhtml_branch_coverage=1 00:05:24.625 --rc genhtml_function_coverage=1 00:05:24.625 --rc genhtml_legend=1 00:05:24.625 --rc geninfo_all_blocks=1 00:05:24.625 --rc 
geninfo_unexecuted_blocks=1 00:05:24.625 00:05:24.625 ' 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.625 --rc genhtml_branch_coverage=1 00:05:24.625 --rc genhtml_function_coverage=1 00:05:24.625 --rc genhtml_legend=1 00:05:24.625 --rc geninfo_all_blocks=1 00:05:24.625 --rc geninfo_unexecuted_blocks=1 00:05:24.625 00:05:24.625 ' 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.625 --rc genhtml_branch_coverage=1 00:05:24.625 --rc genhtml_function_coverage=1 00:05:24.625 --rc genhtml_legend=1 00:05:24.625 --rc geninfo_all_blocks=1 00:05:24.625 --rc geninfo_unexecuted_blocks=1 00:05:24.625 00:05:24.625 ' 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.625 --rc genhtml_branch_coverage=1 00:05:24.625 --rc genhtml_function_coverage=1 00:05:24.625 --rc genhtml_legend=1 00:05:24.625 --rc geninfo_all_blocks=1 00:05:24.625 --rc geninfo_unexecuted_blocks=1 00:05:24.625 00:05:24.625 ' 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
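The `lt 1.15 2` gate traced above (scripts/common.sh `cmp_versions`) splits each version string on `.`, `-`, and `:` and compares the fields numerically, component by component. A simplified sketch of that logic — not the SPDK script itself — looks like:

```shell
# Sketch of the component-wise version comparison traced in the log:
# split both versions on '.', '-' or ':' and compare field by field,
# treating missing fields as 0 (so "2" compares as "2.0").
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v max=${#ver1[@]}
  (( ${#ver2[@]} > max )) && max=${#ver2[@]}
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal is not "less than"
}

# lcov 1.15 is older than 2, so the legacy --rc coverage flags are enabled
lt 1.15 2 && echo "lcov < 2: enable branch/function coverage flags"
```

This is why the run above exports `LCOV_OPTS` with the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` flags: the installed lcov reported a pre-2.0 version.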
00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.625 17:15:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.625 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:24.885 17:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:31.457 17:15:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:31.457 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:31.457 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:31.457 17:15:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:31.457 Found net devices under 0000:af:00.0: cvl_0_0 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:05:31.457 Found net devices under 0000:af:00.1: cvl_0_1 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:31.457 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:31.458 17:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:31.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:31.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:05:31.458 00:05:31.458 --- 10.0.0.2 ping statistics --- 00:05:31.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:31.458 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:31.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:31.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:05:31.458 00:05:31.458 --- 10.0.0.1 ping statistics --- 00:05:31.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:31.458 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1725728 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1725728 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1725728 ']' 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.458 [2024-12-09 17:15:57.163092] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:05:31.458 [2024-12-09 17:15:57.163142] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:31.458 [2024-12-09 17:15:57.242652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.458 [2024-12-09 17:15:57.284788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:31.458 [2024-12-09 17:15:57.284823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:31.458 [2024-12-09 17:15:57.284830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:31.458 [2024-12-09 17:15:57.284835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:31.458 [2024-12-09 17:15:57.284840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:31.458 [2024-12-09 17:15:57.286182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.458 [2024-12-09 17:15:57.286258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.458 [2024-12-09 17:15:57.286259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.458 [2024-12-09 17:15:57.422960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.458 Malloc0 00:05:31.458 17:15:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.458 Delay0 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.458 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.459 [2024-12-09 17:15:57.497778] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:31.459 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.459 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:31.459 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.459 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.459 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.459 17:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:31.459 [2024-12-09 17:15:57.593905] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:33.359 Initializing NVMe Controllers 00:05:33.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:33.359 controller IO queue size 128 less than required 00:05:33.359 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:33.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:33.359 Initialization complete. Launching workers. 
00:05:33.359 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37698 00:05:33.359 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37763, failed to submit 62 00:05:33.359 success 37702, unsuccessful 61, failed 0 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:33.359 rmmod nvme_tcp 00:05:33.359 rmmod nvme_fabrics 00:05:33.359 rmmod nvme_keyring 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:33.359 17:15:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1725728 ']' 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1725728 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1725728 ']' 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1725728 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1725728 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1725728' 00:05:33.359 killing process with pid 1725728 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1725728 00:05:33.359 17:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1725728 00:05:33.618 17:16:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:33.618 17:16:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:33.618 17:16:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:33.618 17:16:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:33.618 17:16:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:33.618 17:16:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:33.618 17:16:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:33.618 17:16:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:33.619 17:16:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:33.619 17:16:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:33.619 17:16:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:33.619 17:16:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:36.155 00:05:36.155 real 0m11.140s 00:05:36.155 user 0m11.620s 00:05:36.155 sys 0m5.384s 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:36.155 ************************************ 00:05:36.155 END TEST nvmf_abort 00:05:36.155 ************************************ 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:36.155 ************************************ 00:05:36.155 START TEST nvmf_ns_hotplug_stress 00:05:36.155 ************************************ 00:05:36.155 17:16:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:36.155 * Looking for test storage... 00:05:36.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.155 
17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.155 17:16:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.155 --rc genhtml_branch_coverage=1 00:05:36.155 --rc genhtml_function_coverage=1 00:05:36.155 --rc genhtml_legend=1 00:05:36.155 --rc geninfo_all_blocks=1 00:05:36.155 --rc geninfo_unexecuted_blocks=1 00:05:36.155 00:05:36.155 ' 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.155 --rc genhtml_branch_coverage=1 00:05:36.155 --rc genhtml_function_coverage=1 00:05:36.155 --rc genhtml_legend=1 00:05:36.155 --rc geninfo_all_blocks=1 00:05:36.155 --rc geninfo_unexecuted_blocks=1 00:05:36.155 00:05:36.155 ' 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.155 --rc genhtml_branch_coverage=1 00:05:36.155 --rc genhtml_function_coverage=1 00:05:36.155 --rc genhtml_legend=1 00:05:36.155 --rc geninfo_all_blocks=1 00:05:36.155 --rc geninfo_unexecuted_blocks=1 00:05:36.155 00:05:36.155 ' 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.155 --rc genhtml_branch_coverage=1 00:05:36.155 --rc genhtml_function_coverage=1 00:05:36.155 --rc genhtml_legend=1 00:05:36.155 --rc geninfo_all_blocks=1 00:05:36.155 --rc geninfo_unexecuted_blocks=1 00:05:36.155 
00:05:36.155 ' 00:05:36.155 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:36.156 17:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:42.736 17:16:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:42.736 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:42.736 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:42.736 17:16:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:42.736 Found net devices under 0000:af:00.0: cvl_0_0 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:42.736 17:16:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:42.736 Found net devices under 0000:af:00.1: cvl_0_1 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:42.736 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:42.737 17:16:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:42.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:42.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:05:42.737 00:05:42.737 --- 10.0.0.2 ping statistics --- 00:05:42.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:42.737 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:42.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:42.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:05:42.737 00:05:42.737 --- 10.0.0.1 ping statistics --- 00:05:42.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:42.737 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1729683 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1729683 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1729683 ']' 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:42.737 [2024-12-09 17:16:08.490302] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:05:42.737 [2024-12-09 17:16:08.490344] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:42.737 [2024-12-09 17:16:08.566708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.737 [2024-12-09 17:16:08.606398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:42.737 [2024-12-09 17:16:08.606432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:42.737 [2024-12-09 17:16:08.606439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:42.737 [2024-12-09 17:16:08.606445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:42.737 [2024-12-09 17:16:08.606450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
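The `nvmf_tcp_init` sequence earlier in the log builds a loopback test topology: the target NIC (`cvl_0_0`) is moved into a private network namespace, both sides get a 10.0.0.0/24 address, TCP port 4420 is opened in iptables, and connectivity is verified with ping. A minimal sketch of that sequence is below; interface names, IPs, and the port are taken from the log, while the `run`/`CMDS` indirection and the dry-run default are illustrative additions so the command sequence can be inspected without root or the real NICs.

```shell
# Record each setup command; execute only when DRY_RUN=0.
declare -a CMDS=()
run() {
    CMDS+=("$*")
    if [[ "${DRY_RUN:-1}" == 0 ]]; then "$@"; fi
}

TARGET_IF=cvl_0_0      INITIATOR_IF=cvl_0_1
TARGET_IP=10.0.0.2     INITIATOR_IP=10.0.0.1
NS=cvl_0_0_ns_spdk     PORT=4420

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"                    # target side lives in the netns
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport "$PORT" -j ACCEPT
run ping -c 1 "$TARGET_IP"                                  # verify initiator -> target path
printf '%s\n' "${CMDS[@]}"
```

Once this is in place, the target itself must be started inside the namespace (hence the `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...` invocation in the log), while the initiator-side tools run in the default namespace.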
00:05:42.737 [2024-12-09 17:16:08.607758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.737 [2024-12-09 17:16:08.607863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.737 [2024-12-09 17:16:08.607864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:42.737 [2024-12-09 17:16:08.920998] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.737 17:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:42.737 17:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:42.996 [2024-12-09 17:16:09.306345] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:42.996 17:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:42.996 17:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:43.254 Malloc0 00:05:43.254 17:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:43.512 Delay0 00:05:43.512 17:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.770 17:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:44.029 NULL1 00:05:44.029 17:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:44.029 17:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1730147 00:05:44.029 17:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:44.029 17:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:44.029 17:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.288 17:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.546 17:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:44.546 17:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:44.804 true 00:05:44.804 17:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:44.804 17:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.062 17:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.062 17:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:45.062 17:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:45.320 true 00:05:45.320 17:16:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:45.320 17:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.578 17:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.837 17:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:45.837 17:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:46.096 true 00:05:46.096 17:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:46.096 17:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.096 17:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.355 17:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:46.355 17:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:46.613 true 00:05:46.613 17:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:46.613 17:16:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.872 17:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.130 17:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:47.130 17:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:47.130 true 00:05:47.388 17:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:47.388 17:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.388 17:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.647 17:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:47.647 17:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:47.905 true 00:05:47.905 17:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:47.905 17:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.164 17:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.422 17:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:48.422 17:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:48.422 true 00:05:48.422 17:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:48.422 17:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.681 17:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.940 17:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:48.940 17:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:49.198 true 00:05:49.198 17:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:49.199 17:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.457 
17:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.715 17:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:49.715 17:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:49.715 true 00:05:49.715 17:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:49.715 17:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.973 17:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.230 17:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:50.230 17:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:50.488 true 00:05:50.488 17:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:50.488 17:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.747 17:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.005 17:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:51.005 17:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:51.005 true 00:05:51.005 17:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:51.005 17:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.262 17:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.520 17:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:51.520 17:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:51.777 true 00:05:51.777 17:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:51.777 17:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.042 17:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.042 
17:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:52.042 17:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:52.301 true 00:05:52.301 17:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:52.301 17:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.559 17:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.816 17:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:52.816 17:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:52.816 true 00:05:52.816 17:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:52.816 17:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.074 17:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.332 17:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:53.332 17:16:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:53.591 true 00:05:53.591 17:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:53.591 17:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.848 17:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.848 17:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:53.848 17:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:54.106 true 00:05:54.106 17:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:54.106 17:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.364 17:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.621 17:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:54.621 17:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:54.879 true 00:05:54.879 17:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:54.879 17:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.879 17:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.137 17:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:55.137 17:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:55.395 true 00:05:55.395 17:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:55.395 17:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.653 17:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.911 17:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:55.911 17:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:56.168 true 00:05:56.168 17:16:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:56.168 17:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.168 17:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.426 17:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:56.426 17:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:56.684 true 00:05:56.684 17:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:56.684 17:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.941 17:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.199 17:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:57.199 17:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:57.199 true 00:05:57.199 17:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:57.199 17:16:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.457 17:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.715 17:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:57.715 17:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:57.973 true 00:05:57.973 17:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:57.973 17:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.231 17:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.489 17:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:58.489 17:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:58.489 true 00:05:58.489 17:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:58.489 17:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.748 17:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.006 17:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:59.006 17:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:59.264 true 00:05:59.264 17:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:59.264 17:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.522 17:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.522 17:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:59.522 17:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:59.780 true 00:05:59.780 17:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:05:59.780 17:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.037 
17:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.295 17:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:00.295 17:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:00.553 true 00:06:00.553 17:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:00.553 17:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.810 17:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.810 17:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:00.810 17:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:01.068 true 00:06:01.068 17:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:01.068 17:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.326 17:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.587 17:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:01.587 17:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:01.587 true 00:06:01.844 17:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:01.844 17:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.844 17:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.102 17:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:02.102 17:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:02.360 true 00:06:02.360 17:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:02.360 17:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.618 17:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.876 
17:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:02.876 17:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:02.876 true 00:06:02.876 17:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:02.876 17:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.134 17:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.393 17:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:03.393 17:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:03.651 true 00:06:03.651 17:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:03.651 17:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.909 17:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.167 17:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:04.167 17:16:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:04.167 true 00:06:04.167 17:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:04.167 17:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.425 17:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.683 17:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:04.683 17:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:04.941 true 00:06:04.941 17:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:04.941 17:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.199 17:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.458 17:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:05.458 17:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:05.458 true 00:06:05.458 17:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:05.716 17:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.716 17:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.975 17:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:05.975 17:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:06.233 true 00:06:06.233 17:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:06.233 17:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.491 17:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.750 17:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:06.750 17:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:06.750 true 00:06:07.007 17:16:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:07.007 17:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.008 17:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.266 17:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:07.266 17:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:07.524 true 00:06:07.524 17:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:07.524 17:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.782 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.041 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:08.041 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:08.041 true 00:06:08.300 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:08.300 17:16:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.300 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.575 17:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:08.575 17:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:08.851 true 00:06:08.851 17:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:08.851 17:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.128 17:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.399 17:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:09.399 17:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:09.399 true 00:06:09.399 17:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:09.399 17:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.657 17:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.915 17:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:09.915 17:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:10.173 true 00:06:10.173 17:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:10.173 17:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.431 17:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.689 17:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:10.689 17:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:10.689 true 00:06:10.689 17:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:10.689 17:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.948 
17:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.206 17:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:11.206 17:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:11.464 true 00:06:11.464 17:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:11.464 17:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.722 17:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.981 17:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:11.981 17:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:11.981 true 00:06:11.981 17:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:11.981 17:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.239 17:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.497 17:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:12.497 17:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:12.755 true 00:06:12.755 17:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:12.755 17:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.013 17:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.271 17:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:13.271 17:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:13.271 true 00:06:13.271 17:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:13.271 17:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.530 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.788 
17:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:13.788 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:14.046 true 00:06:14.046 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:14.046 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.308 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.308 Initializing NVMe Controllers 00:06:14.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:14.308 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:06:14.308 Controller IO queue size 128, less than required. 00:06:14.308 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:14.308 WARNING: Some requested NVMe devices were skipped 00:06:14.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:14.308 Initialization complete. Launching workers. 
00:06:14.308 ======================================================== 00:06:14.308 Latency(us) 00:06:14.308 Device Information : IOPS MiB/s Average min max 00:06:14.308 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27400.16 13.38 4671.42 1515.94 8587.68 00:06:14.308 ======================================================== 00:06:14.308 Total : 27400.16 13.38 4671.42 1515.94 8587.68 00:06:14.308 00:06:14.565 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:14.565 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:14.565 true 00:06:14.565 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1730147 00:06:14.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1730147) - No such process 00:06:14.565 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1730147 00:06:14.565 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.823 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.081 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:15.081 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:15.081 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:15.081 
17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:15.081 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:15.081 null0 00:06:15.340 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:15.340 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:15.340 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:15.340 null1 00:06:15.340 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:15.340 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:15.340 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:15.597 null2 00:06:15.597 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:15.597 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:15.597 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:15.854 null3 00:06:15.854 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:15.854 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( 
i < nthreads )) 00:06:15.854 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:16.112 null4 00:06:16.112 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:16.112 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.112 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:16.112 null5 00:06:16.112 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:16.112 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.112 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:16.370 null6 00:06:16.370 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:16.370 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.370 17:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:16.629 null7 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:16.629 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:16.630 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:16.630 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:16.630 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:16.630 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.630 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1735669 1735670 1735672 1735674 1735676 1735678 1735680 1735682
00:06:16.630 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:16.630 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:16.630 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:16.630 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:16.630 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.630 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:16.888 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:16.888 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:16.888 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:16.888 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:16.888 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:16.888 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:16.888 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:16.888 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:17.146 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:17.147 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.405 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:17.663 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:17.663 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.663 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:17.663 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:17.663 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:17.663 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:17.663 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:17.663 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.922 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.181 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:18.439 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.440 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.440 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:18.440 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:18.440 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:18.440 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:18.440 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:18.440 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:18.440 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:18.440 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:18.440 17:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.698 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:18.956 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:18.956 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:18.956 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:18.956 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:18.956 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:18.956 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:18.956 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:18.956 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.214 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.473 17:16:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.473 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.731 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.731 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:19.731 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.731 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.731 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.731 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.731 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.731 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.989 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.247 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.247 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.247 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.247 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.247 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.247 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.247 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.247 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.247 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.247 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.247 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.505 17:16:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.505 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.505 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.505 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.505 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.505 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.505 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.505 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.505 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:20.763 rmmod nvme_tcp 00:06:20.763 rmmod nvme_fabrics 00:06:20.763 rmmod nvme_keyring 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@128 -- # set -e 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1729683 ']' 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1729683 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1729683 ']' 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1729683 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.763 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729683 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729683' 00:06:21.022 killing process with pid 1729683 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1729683 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1729683 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:21.022 17:16:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:21.022 17:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:23.557 00:06:23.557 real 0m47.399s 00:06:23.557 user 3m21.440s 00:06:23.557 sys 0m17.218s 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:23.557 ************************************ 00:06:23.557 END TEST nvmf_ns_hotplug_stress 00:06:23.557 ************************************ 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:23.557 ************************************ 00:06:23.557 START TEST nvmf_delete_subsystem 00:06:23.557 ************************************ 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:23.557 * Looking for test storage... 00:06:23.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.557 
17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.557 --rc genhtml_branch_coverage=1 00:06:23.557 --rc genhtml_function_coverage=1 00:06:23.557 --rc genhtml_legend=1 
00:06:23.557 --rc geninfo_all_blocks=1 00:06:23.557 --rc geninfo_unexecuted_blocks=1 00:06:23.557 00:06:23.557 ' 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.557 --rc genhtml_branch_coverage=1 00:06:23.557 --rc genhtml_function_coverage=1 00:06:23.557 --rc genhtml_legend=1 00:06:23.557 --rc geninfo_all_blocks=1 00:06:23.557 --rc geninfo_unexecuted_blocks=1 00:06:23.557 00:06:23.557 ' 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.557 --rc genhtml_branch_coverage=1 00:06:23.557 --rc genhtml_function_coverage=1 00:06:23.557 --rc genhtml_legend=1 00:06:23.557 --rc geninfo_all_blocks=1 00:06:23.557 --rc geninfo_unexecuted_blocks=1 00:06:23.557 00:06:23.557 ' 00:06:23.557 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.557 --rc genhtml_branch_coverage=1 00:06:23.557 --rc genhtml_function_coverage=1 00:06:23.557 --rc genhtml_legend=1 00:06:23.558 --rc geninfo_all_blocks=1 00:06:23.558 --rc geninfo_unexecuted_blocks=1 00:06:23.558 00:06:23.558 ' 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:23.558 17:16:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:23.558 17:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:30.125 17:16:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:30.125 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:30.125 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:30.126 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:30.126 Found net devices under 0000:af:00.0: cvl_0_0 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:30.126 17:16:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:30.126 Found net devices under 0000:af:00.1: cvl_0_1 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:30.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:30.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:06:30.126 00:06:30.126 --- 10.0.0.2 ping statistics --- 00:06:30.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.126 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:30.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:30.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:06:30.126 00:06:30.126 --- 10.0.0.1 ping statistics --- 00:06:30.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.126 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1739992 00:06:30.126 17:16:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1739992 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1739992 ']' 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.126 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.126 [2024-12-09 17:16:55.826406] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:06:30.126 [2024-12-09 17:16:55.826454] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.126 [2024-12-09 17:16:55.904916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.126 [2024-12-09 17:16:55.947217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:30.126 [2024-12-09 17:16:55.947249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:30.126 [2024-12-09 17:16:55.947256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:30.126 [2024-12-09 17:16:55.947263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:30.126 [2024-12-09 17:16:55.947268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:30.126 [2024-12-09 17:16:55.948369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.127 [2024-12-09 17:16:55.948372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.127 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.127 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:30.127 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:30.127 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.127 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.385 [2024-12-09 17:16:56.695026] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.385 [2024-12-09 17:16:56.715208] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.385 NULL1 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.385 17:16:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.385 Delay0 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1740230 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:30.385 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:30.385 [2024-12-09 17:16:56.826060] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:32.283 17:16:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:32.283 17:16:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.283 17:16:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error 
(sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 [2024-12-09 17:16:58.943369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c92c0 is same with the state(6) to be set 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 
00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed 
with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 
00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.541 starting I/O failed: -6 00:06:32.541 Write completed with error (sct=0, sc=8) 00:06:32.541 Read completed with error (sct=0, sc=8) 00:06:32.542 Read completed with error (sct=0, sc=8) 00:06:32.542 Read completed with error (sct=0, sc=8) 00:06:32.542 starting I/O failed: -6 00:06:32.542 Read completed with error (sct=0, sc=8) 
00:06:32.542 Write completed with error (sct=0, sc=8) 00:06:32.542 Read completed with error (sct=0, sc=8) 00:06:32.542 Write completed with error (sct=0, sc=8) 00:06:32.542 starting I/O failed: -6 00:06:32.542 Read completed with error (sct=0, sc=8) 00:06:32.542 Read completed with error (sct=0, sc=8) 00:06:32.542 Write completed with error (sct=0, sc=8) 00:06:32.542 Read completed with error (sct=0, sc=8) 00:06:32.542 starting I/O failed: -6 00:06:32.542 Read completed with error (sct=0, sc=8) 00:06:32.542 Read completed with error (sct=0, sc=8) 00:06:32.542 [2024-12-09 17:16:58.944579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc4fc00d490 is same with the state(6) to be set 00:06:33.475 [2024-12-09 17:16:59.920047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ca9b0 is same with the state(6) to be set 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error 
(sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 [2024-12-09 17:16:59.946562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc4fc00d7c0 is same with the state(6) to be set 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, 
sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 [2024-12-09 17:16:59.946727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc4fc00d020 is same with the state(6) to be set 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 
00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Write completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.475 Read completed with error (sct=0, sc=8) 00:06:33.476 Write completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Write completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 [2024-12-09 17:16:59.946920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc4fc000c40 is same with the state(6) to be set 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read 
completed with error (sct=0, sc=8) 00:06:33.476 Write completed with error (sct=0, sc=8) 00:06:33.476 Write completed with error (sct=0, sc=8) 00:06:33.476 Write completed with error (sct=0, sc=8) 00:06:33.476 Write completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Read completed with error (sct=0, sc=8) 00:06:33.476 Write completed with error (sct=0, sc=8) 00:06:33.476 Write completed with error (sct=0, sc=8) 00:06:33.476 [2024-12-09 17:16:59.947775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c9960 is same with the state(6) to be set 00:06:33.476 Initializing NVMe Controllers 00:06:33.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:33.476 Controller IO queue size 128, less than required. 00:06:33.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:33.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:33.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:33.476 Initialization complete. Launching workers. 
00:06:33.476 ======================================================== 00:06:33.476 Latency(us) 00:06:33.476 Device Information : IOPS MiB/s Average min max 00:06:33.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.68 0.08 878598.04 263.05 1008165.27 00:06:33.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.66 0.08 1191080.58 1046.13 2002461.02 00:06:33.476 ======================================================== 00:06:33.476 Total : 311.34 0.15 1035837.66 263.05 2002461.02 00:06:33.476 00:06:33.476 [2024-12-09 17:16:59.948178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ca9b0 (9): Bad file descriptor 00:06:33.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:33.476 17:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.476 17:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:33.476 17:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1740230 00:06:33.476 17:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1740230 00:06:34.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1740230) - No such process 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1740230 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:34.041 17:17:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1740230 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1740230 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:34.041 
17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.041 [2024-12-09 17:17:00.479641] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1740902 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1740902 00:06:34.041 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:34.041 [2024-12-09 17:17:00.568337] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:34.606 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.606 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1740902 00:06:34.606 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.172 17:17:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.172 17:17:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1740902 00:06:35.172 17:17:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.736 17:17:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.736 17:17:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1740902 00:06:35.736 17:17:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.993 17:17:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.993 17:17:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1740902 00:06:35.993 17:17:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:36.558 17:17:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:36.558 17:17:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1740902 00:06:36.558 17:17:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:37.123 17:17:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:37.123 17:17:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1740902 00:06:37.123 17:17:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:37.381 Initializing NVMe Controllers 00:06:37.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:37.381 Controller IO queue size 128, less than required. 00:06:37.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:37.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:37.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:37.381 Initialization complete. Launching workers. 00:06:37.381 ======================================================== 00:06:37.381 Latency(us) 00:06:37.381 Device Information : IOPS MiB/s Average min max 00:06:37.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002568.32 1000127.37 1009327.77 00:06:37.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003595.49 1000170.49 1010402.24 00:06:37.381 ======================================================== 00:06:37.381 Total : 256.00 0.12 1003081.91 1000127.37 1010402.24 00:06:37.381 00:06:37.639 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:37.639 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1740902 00:06:37.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1740902) - No such process 00:06:37.639 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 1740902 00:06:37.639 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:37.639 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:37.640 rmmod nvme_tcp 00:06:37.640 rmmod nvme_fabrics 00:06:37.640 rmmod nvme_keyring 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1739992 ']' 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1739992 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1739992 ']' 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1739992 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:37.640 17:17:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1739992 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1739992' 00:06:37.640 killing process with pid 1739992 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1739992 00:06:37.640 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1739992 00:06:37.898 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:37.898 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:37.898 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:37.898 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:37.898 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:37.898 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:37.898 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:37.898 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:37.898 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:06:37.898 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.898 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.898 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:40.434 00:06:40.434 real 0m16.728s 00:06:40.434 user 0m30.613s 00:06:40.434 sys 0m5.408s 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.434 ************************************ 00:06:40.434 END TEST nvmf_delete_subsystem 00:06:40.434 ************************************ 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.434 ************************************ 00:06:40.434 START TEST nvmf_host_management 00:06:40.434 ************************************ 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:40.434 * Looking for test storage... 
00:06:40.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:40.434 17:17:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:40.434 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.435 17:17:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:40.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.435 --rc genhtml_branch_coverage=1 00:06:40.435 --rc genhtml_function_coverage=1 00:06:40.435 --rc genhtml_legend=1 00:06:40.435 --rc geninfo_all_blocks=1 00:06:40.435 --rc geninfo_unexecuted_blocks=1 00:06:40.435 00:06:40.435 ' 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:40.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.435 --rc genhtml_branch_coverage=1 00:06:40.435 --rc genhtml_function_coverage=1 00:06:40.435 --rc genhtml_legend=1 00:06:40.435 --rc geninfo_all_blocks=1 00:06:40.435 --rc geninfo_unexecuted_blocks=1 00:06:40.435 00:06:40.435 ' 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:40.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.435 --rc genhtml_branch_coverage=1 00:06:40.435 --rc genhtml_function_coverage=1 00:06:40.435 --rc genhtml_legend=1 00:06:40.435 --rc geninfo_all_blocks=1 00:06:40.435 --rc geninfo_unexecuted_blocks=1 00:06:40.435 00:06:40.435 ' 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:40.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.435 --rc genhtml_branch_coverage=1 00:06:40.435 --rc genhtml_function_coverage=1 00:06:40.435 --rc genhtml_legend=1 00:06:40.435 --rc geninfo_all_blocks=1 00:06:40.435 --rc geninfo_unexecuted_blocks=1 00:06:40.435 00:06:40.435 ' 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:40.435 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:47.008 17:17:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:47.008 17:17:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:47.008 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:47.008 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:47.008 17:17:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:47.008 Found net devices under 0000:af:00.0: cvl_0_0 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:47.008 Found net devices under 0000:af:00.1: cvl_0_1 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:47.008 17:17:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:47.008 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:47.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:47.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:06:47.008 00:06:47.009 --- 10.0.0.2 ping statistics --- 00:06:47.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.009 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:47.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:47.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:06:47.009 00:06:47.009 --- 10.0.0.1 ping statistics --- 00:06:47.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.009 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1745028 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1745028 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1745028 ']' 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
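`waitforlisten` blocks until the freshly started nvmf_tgt (pid 1745028 here) is serving RPCs on /var/tmp/spdk.sock. A minimal sketch of that pattern, assuming a simple poll-until-the-path-appears loop with bounded retries; the real SPDK helper additionally checks that the process is alive and the RPC server answers, and in this demo a temp file created by a background job stands in for the UNIX socket:

```shell
#!/usr/bin/env bash
# Minimal sketch of the waitforlisten pattern: poll until a path appears,
# up to max_retries attempts. The real SPDK helper checks a UNIX domain
# socket and that the RPC server responds; here a plain file stands in.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

sock=$(mktemp -u)              # placeholder for /var/tmp/spdk.sock
( sleep 0.3; : > "$sock" ) &   # stand-in for nvmf_tgt creating its socket
if wait_for_path "$sock" 50; then status=listening; else status=timeout; fi
wait                           # reap the background helper
echo "$status"
rm -f "$sock"
```

Bounding the retries (max_retries=100 matches the `local max_retries=100` visible in the log) turns a hung target start into a prompt test failure instead of a stalled CI job.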
00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.009 [2024-12-09 17:17:12.665557] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:06:47.009 [2024-12-09 17:17:12.665608] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.009 [2024-12-09 17:17:12.742713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.009 [2024-12-09 17:17:12.784796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:47.009 [2024-12-09 17:17:12.784830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:47.009 [2024-12-09 17:17:12.784837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:47.009 [2024-12-09 17:17:12.784843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:47.009 [2024-12-09 17:17:12.784848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:47.009 [2024-12-09 17:17:12.786198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.009 [2024-12-09 17:17:12.786306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.009 [2024-12-09 17:17:12.786412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.009 [2024-12-09 17:17:12.786413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.009 [2024-12-09 17:17:12.923605] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:47.009 17:17:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.009 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.009 Malloc0 00:06:47.009 [2024-12-09 17:17:12.997245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1745114 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1745114 /var/tmp/bdevperf.sock 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1745114 ']' 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:47.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:47.009 { 00:06:47.009 "params": { 00:06:47.009 "name": "Nvme$subsystem", 00:06:47.009 "trtype": "$TEST_TRANSPORT", 00:06:47.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:47.009 "adrfam": "ipv4", 00:06:47.009 "trsvcid": "$NVMF_PORT", 00:06:47.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:47.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:47.009 "hdgst": ${hdgst:-false}, 
00:06:47.009 "ddgst": ${ddgst:-false} 00:06:47.009 }, 00:06:47.009 "method": "bdev_nvme_attach_controller" 00:06:47.009 } 00:06:47.009 EOF 00:06:47.009 )") 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:47.009 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:47.009 "params": { 00:06:47.009 "name": "Nvme0", 00:06:47.009 "trtype": "tcp", 00:06:47.009 "traddr": "10.0.0.2", 00:06:47.009 "adrfam": "ipv4", 00:06:47.009 "trsvcid": "4420", 00:06:47.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:47.009 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:47.009 "hdgst": false, 00:06:47.009 "ddgst": false 00:06:47.009 }, 00:06:47.009 "method": "bdev_nvme_attach_controller" 00:06:47.009 }' 00:06:47.009 [2024-12-09 17:17:13.093551] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:06:47.009 [2024-12-09 17:17:13.093596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745114 ] 00:06:47.009 [2024-12-09 17:17:13.168352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.009 [2024-12-09 17:17:13.207672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.009 Running I/O for 10 seconds... 
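The "Running I/O for 10 seconds..." line marks bdevperf starting against the generated Nvme0 config; the harness then enters `waitforio`, polling bdev_get_iostat until Nvme0n1 has completed at least 100 reads. A hedged sketch of that loop follows, with the `rpc_cmd | jq` pipeline stubbed out to return the two samples seen later in the log (99, then 707):

```shell
#!/usr/bin/env bash
# Sketch of the waitforio polling loop from target/host_management.sh.
# The stub below replaces:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
#       | jq -r '.bdevs[0].num_read_ops'
# with canned samples matching the log (99 on the first poll, 707 on the next).
samples=(99 707)
idx=0
read_io_count() {                  # sets $count; no subshell, so idx persists
    count=${samples[idx]}
    idx=$((idx + 1))
}

ret=1
for ((i = 10; i != 0; i--)); do    # up to 10 polls, 0.25 s apart
    read_io_count
    if [ "$count" -ge 100 ]; then  # enough reads observed: I/O is flowing
        ret=0
        break
    fi
    sleep 0.25
done
echo "ret=$ret read_io_count=$count"   # prints: ret=0 read_io_count=707
```

Polling the iostat counter instead of sleeping a fixed interval lets the test proceed as soon as I/O is confirmed and fail fast (after ten polls) if no reads ever complete.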
00:06:47.010 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.010 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:47.010 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:47.010 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.010 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=99 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 99 -ge 100 ']' 00:06:47.268 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.527 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.527 [2024-12-09 17:17:13.893366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea160 is same with the state(6) to be set [... identical "recv state of tqpair=0x1eea160 is same with the state(6) to be set" errors repeated between 17:17:13.893366 and 17:17:13.893791; duplicates elided ...] 00:06:47.528 [2024-12-09 17:17:13.893877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.528 [2024-12-09 17:17:13.893913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.528 [2024-12-09 17:17:13.893929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.528 [2024-12-09 17:17:13.893936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.528 [2024-12-09 17:17:13.893945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.528 [2024-12-09 17:17:13.893953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:47.528 [2024-12-09 17:17:13.893962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.528 [2024-12-09 17:17:13.893968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... analogous READ / ABORTED - SQ DELETION (00/08) pairs for cid:4 through cid:30 (lba 98816 through 102144, step 128) elided ...] 00:06:47.529 [2024-12-09 17:17:13.894395] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 
[2024-12-09 17:17:13.894652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.529 [2024-12-09 17:17:13.894812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.529 [2024-12-09 17:17:13.894818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.530 [2024-12-09 17:17:13.894826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.530 [2024-12-09 17:17:13.894835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.530 [2024-12-09 17:17:13.894843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.530 [2024-12-09 17:17:13.894849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.530 [2024-12-09 17:17:13.894857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.530 [2024-12-09 17:17:13.894863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.530 [2024-12-09 17:17:13.894872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.530 [2024-12-09 17:17:13.894879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.530 [2024-12-09 17:17:13.894886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d8720 is same with the state(6) to be set 00:06:47.530 [2024-12-09 17:17:13.895851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:47.530 17:17:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.530 task offset: 98304 on job bdev=Nvme0n1 fails 00:06:47.530 00:06:47.530 Latency(us) 00:06:47.530 [2024-12-09T16:17:14.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:47.530 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:47.530 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:47.530 Verification LBA range: start 0x0 length 0x400 00:06:47.530 Nvme0n1 : 0.40 1927.24 120.45 160.60 0.00 29829.04 3588.88 27088.21 00:06:47.530 [2024-12-09T16:17:14.070Z] =================================================================================================================== 00:06:47.530 [2024-12-09T16:17:14.070Z] Total : 1927.24 120.45 160.60 0.00 29829.04 3588.88 27088.21 00:06:47.530 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:47.530 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.530 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.530 [2024-12-09 17:17:13.898228] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.530 [2024-12-09 17:17:13.898249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbf760 (9): Bad file descriptor 00:06:47.530 [2024-12-09 17:17:13.900012] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:47.530 [2024-12-09 17:17:13.900083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:47.530 [2024-12-09 17:17:13.900104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.530 [2024-12-09 17:17:13.900116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:47.530 [2024-12-09 17:17:13.900124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:47.530 [2024-12-09 17:17:13.900130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:47.530 [2024-12-09 17:17:13.900137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1fbf760 00:06:47.530 [2024-12-09 17:17:13.900157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbf760 (9): Bad file descriptor 00:06:47.530 [2024-12-09 17:17:13.900174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:47.530 [2024-12-09 17:17:13.900181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:47.530 [2024-12-09 17:17:13.900190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:06:47.530 [2024-12-09 17:17:13.900198] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:06:47.530 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.530 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:48.463 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1745114 00:06:48.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1745114) - No such process 00:06:48.463 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:48.463 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:48.463 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:48.463 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:48.463 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:48.463 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:48.463 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:48.464 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:48.464 { 00:06:48.464 "params": { 00:06:48.464 "name": "Nvme$subsystem", 00:06:48.464 "trtype": "$TEST_TRANSPORT", 00:06:48.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:48.464 "adrfam": "ipv4", 00:06:48.464 "trsvcid": "$NVMF_PORT", 00:06:48.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:48.464 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:48.464 "hdgst": ${hdgst:-false}, 00:06:48.464 "ddgst": ${ddgst:-false} 00:06:48.464 }, 00:06:48.464 "method": "bdev_nvme_attach_controller" 00:06:48.464 } 00:06:48.464 EOF 00:06:48.464 )") 00:06:48.464 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:48.464 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:48.464 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:48.464 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:48.464 "params": { 00:06:48.464 "name": "Nvme0", 00:06:48.464 "trtype": "tcp", 00:06:48.464 "traddr": "10.0.0.2", 00:06:48.464 "adrfam": "ipv4", 00:06:48.464 "trsvcid": "4420", 00:06:48.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:48.464 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:48.464 "hdgst": false, 00:06:48.464 "ddgst": false 00:06:48.464 }, 00:06:48.464 "method": "bdev_nvme_attach_controller" 00:06:48.464 }' 00:06:48.464 [2024-12-09 17:17:14.960363] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:06:48.464 [2024-12-09 17:17:14.960409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745368 ] 00:06:48.722 [2024-12-09 17:17:15.032710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.722 [2024-12-09 17:17:15.071462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.722 Running I/O for 1 seconds... 
00:06:50.093 1984.00 IOPS, 124.00 MiB/s 00:06:50.093 Latency(us) 00:06:50.093 [2024-12-09T16:17:16.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.093 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:50.093 Verification LBA range: start 0x0 length 0x400 00:06:50.093 Nvme0n1 : 1.01 2033.73 127.11 0.00 0.00 30977.78 4493.90 27712.37 00:06:50.093 [2024-12-09T16:17:16.633Z] =================================================================================================================== 00:06:50.093 [2024-12-09T16:17:16.633Z] Total : 2033.73 127.11 0.00 0.00 30977.78 4493.90 27712.37 00:06:50.093 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:50.093 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:50.093 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:50.094 17:17:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:50.094 rmmod nvme_tcp 00:06:50.094 rmmod nvme_fabrics 00:06:50.094 rmmod nvme_keyring 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1745028 ']' 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1745028 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1745028 ']' 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1745028 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1745028 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1745028' 00:06:50.094 killing process with pid 1745028 00:06:50.094 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1745028 00:06:50.094 17:17:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1745028 00:06:50.353 [2024-12-09 17:17:16.690031] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:50.353 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:50.353 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:50.353 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:50.353 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:50.353 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:50.353 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:50.353 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:50.353 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:50.353 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:50.353 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.353 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.353 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.257 17:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:52.257 17:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:52.257 00:06:52.257 real 0m12.341s 00:06:52.257 user 0m19.598s 
00:06:52.257 sys 0m5.535s 00:06:52.257 17:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.257 17:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.257 ************************************ 00:06:52.257 END TEST nvmf_host_management 00:06:52.257 ************************************ 00:06:52.516 17:17:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:52.516 17:17:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.516 17:17:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.516 17:17:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:52.516 ************************************ 00:06:52.516 START TEST nvmf_lvol 00:06:52.516 ************************************ 00:06:52.516 17:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:52.516 * Looking for test storage... 
00:06:52.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:52.516 17:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:52.516 17:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:52.516 17:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.516 17:17:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:52.516 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:52.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.517 --rc genhtml_branch_coverage=1 00:06:52.517 --rc genhtml_function_coverage=1 00:06:52.517 --rc genhtml_legend=1 00:06:52.517 --rc geninfo_all_blocks=1 00:06:52.517 --rc geninfo_unexecuted_blocks=1 
00:06:52.517 00:06:52.517 ' 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:52.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.517 --rc genhtml_branch_coverage=1 00:06:52.517 --rc genhtml_function_coverage=1 00:06:52.517 --rc genhtml_legend=1 00:06:52.517 --rc geninfo_all_blocks=1 00:06:52.517 --rc geninfo_unexecuted_blocks=1 00:06:52.517 00:06:52.517 ' 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:52.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.517 --rc genhtml_branch_coverage=1 00:06:52.517 --rc genhtml_function_coverage=1 00:06:52.517 --rc genhtml_legend=1 00:06:52.517 --rc geninfo_all_blocks=1 00:06:52.517 --rc geninfo_unexecuted_blocks=1 00:06:52.517 00:06:52.517 ' 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:52.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.517 --rc genhtml_branch_coverage=1 00:06:52.517 --rc genhtml_function_coverage=1 00:06:52.517 --rc genhtml_legend=1 00:06:52.517 --rc geninfo_all_blocks=1 00:06:52.517 --rc geninfo_unexecuted_blocks=1 00:06:52.517 00:06:52.517 ' 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.517 17:17:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.517 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.776 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:52.776 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:52.776 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.776 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.776 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:52.776 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.776 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:52.776 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:52.776 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.776 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.776 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.776 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.776 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:52.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:52.777 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:59.348 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:59.349 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:59.349 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:59.349 
17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:59.349 Found net devices under 0000:af:00.0: cvl_0_0 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:59.349 17:17:24 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:59.349 Found net devices under 0000:af:00.1: cvl_0_1 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:59.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:59.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:06:59.349 00:06:59.349 --- 10.0.0.2 ping statistics --- 00:06:59.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.349 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:59.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:59.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:06:59.349 00:06:59.349 --- 10.0.0.1 ping statistics --- 00:06:59.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.349 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:59.349 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:59.349 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:59.349 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:59.349 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:59.349 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:59.349 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1749290 00:06:59.349 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1749290 00:06:59.349 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:59.349 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1749290 ']' 00:06:59.349 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.349 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.349 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.349 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.349 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:59.349 [2024-12-09 17:17:25.068369] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:06:59.349 [2024-12-09 17:17:25.068415] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.349 [2024-12-09 17:17:25.147215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.349 [2024-12-09 17:17:25.187471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:59.349 [2024-12-09 17:17:25.187506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:59.349 [2024-12-09 17:17:25.187513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:59.349 [2024-12-09 17:17:25.187519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:59.350 [2024-12-09 17:17:25.187524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:59.350 [2024-12-09 17:17:25.188753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.350 [2024-12-09 17:17:25.188863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.350 [2024-12-09 17:17:25.188865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.350 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.350 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:59.350 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:59.350 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:59.350 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:59.350 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:59.350 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:59.350 [2024-12-09 17:17:25.486502] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.350 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:59.350 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:59.350 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:59.607 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:59.607 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:59.865 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:59.865 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4d32587a-28f4-484b-9d6e-f6a78c7c5cd7 00:06:59.865 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4d32587a-28f4-484b-9d6e-f6a78c7c5cd7 lvol 20 00:07:00.123 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3d643a75-d579-4fb0-bbb0-cfef2fd5ab87 00:07:00.123 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:00.401 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3d643a75-d579-4fb0-bbb0-cfef2fd5ab87 00:07:00.699 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:00.699 [2024-12-09 17:17:27.126463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.699 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:00.982 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1749568 00:07:00.982 17:17:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:00.982 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:01.925 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3d643a75-d579-4fb0-bbb0-cfef2fd5ab87 MY_SNAPSHOT 00:07:02.182 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a7c8dc52-edfa-40ea-b7a7-13679bbcae99 00:07:02.183 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3d643a75-d579-4fb0-bbb0-cfef2fd5ab87 30 00:07:02.441 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a7c8dc52-edfa-40ea-b7a7-13679bbcae99 MY_CLONE 00:07:02.698 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ceeaaeca-41de-4efb-814b-a23706a81eda 00:07:02.698 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ceeaaeca-41de-4efb-814b-a23706a81eda 00:07:03.264 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1749568 00:07:11.370 Initializing NVMe Controllers 00:07:11.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:11.370 Controller IO queue size 128, less than required. 00:07:11.370 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:11.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:11.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:11.370 Initialization complete. Launching workers. 00:07:11.370 ======================================================== 00:07:11.370 Latency(us) 00:07:11.370 Device Information : IOPS MiB/s Average min max 00:07:11.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11887.20 46.43 10772.25 1575.28 50025.19 00:07:11.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11998.10 46.87 10671.93 1283.00 110684.67 00:07:11.370 ======================================================== 00:07:11.370 Total : 23885.30 93.30 10721.86 1283.00 110684.67 00:07:11.370 00:07:11.370 17:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:11.628 17:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3d643a75-d579-4fb0-bbb0-cfef2fd5ab87 00:07:11.628 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d32587a-28f4-484b-9d6e-f6a78c7c5cd7 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:11.886 17:17:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:11.886 rmmod nvme_tcp 00:07:11.886 rmmod nvme_fabrics 00:07:11.886 rmmod nvme_keyring 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1749290 ']' 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1749290 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1749290 ']' 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1749290 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.886 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749290 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749290' 00:07:12.144 killing process with pid 1749290 00:07:12.144 
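The `killprocess` trace above follows a recognizable shape: verify the pid is set and alive, check the process name with `ps` so a `sudo` wrapper is never killed directly, then kill and reap it. A simplified sketch of that pattern (the real `common/autotest_common.sh` helper also handles non-Linux `uname` cases not shown here):

```shell
# Simplified sketch of the killprocess pattern seen in the trace:
# refuse empty pids, skip already-dead processes, never target sudo itself,
# then kill and wait so the pid is reaped.
killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ "$name" != sudo ]] || return 1               # don't kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap; SIGTERM exit is expected
}
```

Usage mirrors the log: `killprocess "$nvmfpid"` after the subsystem and lvstore have been torn down.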
17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1749290 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1749290 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.144 17:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:14.679 00:07:14.679 real 0m21.883s 00:07:14.679 user 1m3.010s 00:07:14.679 sys 0m7.650s 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.679 ************************************ 00:07:14.679 
END TEST nvmf_lvol 00:07:14.679 ************************************ 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:14.679 ************************************ 00:07:14.679 START TEST nvmf_lvs_grow 00:07:14.679 ************************************ 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:14.679 * Looking for test storage... 00:07:14.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.679 17:17:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:14.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.679 --rc genhtml_branch_coverage=1 00:07:14.679 --rc genhtml_function_coverage=1 00:07:14.679 --rc genhtml_legend=1 00:07:14.679 --rc geninfo_all_blocks=1 00:07:14.679 --rc geninfo_unexecuted_blocks=1 00:07:14.679 00:07:14.679 ' 
00:07:14.679 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:14.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.679 --rc genhtml_branch_coverage=1 00:07:14.679 --rc genhtml_function_coverage=1 00:07:14.679 --rc genhtml_legend=1 00:07:14.679 --rc geninfo_all_blocks=1 00:07:14.680 --rc geninfo_unexecuted_blocks=1 00:07:14.680 00:07:14.680 ' 00:07:14.680 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:14.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.680 --rc genhtml_branch_coverage=1 00:07:14.680 --rc genhtml_function_coverage=1 00:07:14.680 --rc genhtml_legend=1 00:07:14.680 --rc geninfo_all_blocks=1 00:07:14.680 --rc geninfo_unexecuted_blocks=1 00:07:14.680 00:07:14.680 ' 00:07:14.680 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:14.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.680 --rc genhtml_branch_coverage=1 00:07:14.680 --rc genhtml_function_coverage=1 00:07:14.680 --rc genhtml_legend=1 00:07:14.680 --rc geninfo_all_blocks=1 00:07:14.680 --rc geninfo_unexecuted_blocks=1 00:07:14.680 00:07:14.680 ' 00:07:14.680 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.680 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:14.680 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.680 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.680 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.680 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.680 17:17:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.680 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.680 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.680 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.680 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.680 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.680 
17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.680 17:17:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:14.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.680 
17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:14.680 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.249 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:21.250 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:21.250 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:21.250 
17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:21.250 Found net devices under 0000:af:00.0: cvl_0_0 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:21.250 Found net devices under 0000:af:00.1: cvl_0_1 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:21.250 17:17:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:21.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:07:21.250 00:07:21.250 --- 10.0.0.2 ping statistics --- 00:07:21.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.250 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:07:21.250 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:07:21.250 00:07:21.250 --- 10.0.0.1 ping statistics --- 00:07:21.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.250 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1755061 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1755061 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1755061 ']' 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.250 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.250 [2024-12-09 17:17:47.104058] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:07:21.250 [2024-12-09 17:17:47.104099] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.250 [2024-12-09 17:17:47.163393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.250 [2024-12-09 17:17:47.203312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.250 [2024-12-09 17:17:47.203346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.250 [2024-12-09 17:17:47.203354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.250 [2024-12-09 17:17:47.203361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.250 [2024-12-09 17:17:47.203366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:21.251 [2024-12-09 17:17:47.203847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:21.251 [2024-12-09 17:17:47.495119] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.251 ************************************ 00:07:21.251 START TEST lvs_grow_clean 00:07:21.251 ************************************ 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:21.251 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:21.509 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:21.509 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f09fff0b-3ee8-4c86-8023-8925932061c5 00:07:21.509 17:17:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f09fff0b-3ee8-4c86-8023-8925932061c5 00:07:21.509 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:21.768 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:21.768 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:21.768 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f09fff0b-3ee8-4c86-8023-8925932061c5 lvol 150 00:07:22.027 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d0ebbcba-b123-42d6-8a13-bd47449c8718 00:07:22.027 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.027 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:22.027 [2024-12-09 17:17:48.535093] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:22.027 [2024-12-09 17:17:48.535143] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:22.027 true 00:07:22.027 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f09fff0b-3ee8-4c86-8023-8925932061c5 00:07:22.027 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:22.285 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:22.285 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:22.543 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d0ebbcba-b123-42d6-8a13-bd47449c8718 00:07:22.802 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:22.802 [2024-12-09 17:17:49.273298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.802 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:23.060 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:23.060 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1755507 00:07:23.060 17:17:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.060 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1755507 /var/tmp/bdevperf.sock 00:07:23.060 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1755507 ']' 00:07:23.060 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:23.060 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.060 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:23.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:23.060 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.060 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:23.060 [2024-12-09 17:17:49.517734] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:07:23.060 [2024-12-09 17:17:49.517780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755507 ] 00:07:23.060 [2024-12-09 17:17:49.593262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.318 [2024-12-09 17:17:49.634126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.318 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.318 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:23.318 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:23.576 Nvme0n1 00:07:23.576 17:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:23.833 [ 00:07:23.833 { 00:07:23.833 "name": "Nvme0n1", 00:07:23.833 "aliases": [ 00:07:23.833 "d0ebbcba-b123-42d6-8a13-bd47449c8718" 00:07:23.833 ], 00:07:23.833 "product_name": "NVMe disk", 00:07:23.833 "block_size": 4096, 00:07:23.833 "num_blocks": 38912, 00:07:23.833 "uuid": "d0ebbcba-b123-42d6-8a13-bd47449c8718", 00:07:23.833 "numa_id": 1, 00:07:23.833 "assigned_rate_limits": { 00:07:23.833 "rw_ios_per_sec": 0, 00:07:23.833 "rw_mbytes_per_sec": 0, 00:07:23.833 "r_mbytes_per_sec": 0, 00:07:23.833 "w_mbytes_per_sec": 0 00:07:23.833 }, 00:07:23.833 "claimed": false, 00:07:23.833 "zoned": false, 00:07:23.833 "supported_io_types": { 00:07:23.833 "read": true, 
00:07:23.833 "write": true, 00:07:23.833 "unmap": true, 00:07:23.833 "flush": true, 00:07:23.833 "reset": true, 00:07:23.833 "nvme_admin": true, 00:07:23.833 "nvme_io": true, 00:07:23.833 "nvme_io_md": false, 00:07:23.833 "write_zeroes": true, 00:07:23.833 "zcopy": false, 00:07:23.833 "get_zone_info": false, 00:07:23.833 "zone_management": false, 00:07:23.833 "zone_append": false, 00:07:23.833 "compare": true, 00:07:23.833 "compare_and_write": true, 00:07:23.833 "abort": true, 00:07:23.833 "seek_hole": false, 00:07:23.833 "seek_data": false, 00:07:23.833 "copy": true, 00:07:23.833 "nvme_iov_md": false 00:07:23.833 }, 00:07:23.833 "memory_domains": [ 00:07:23.833 { 00:07:23.833 "dma_device_id": "system", 00:07:23.833 "dma_device_type": 1 00:07:23.833 } 00:07:23.833 ], 00:07:23.833 "driver_specific": { 00:07:23.833 "nvme": [ 00:07:23.833 { 00:07:23.833 "trid": { 00:07:23.833 "trtype": "TCP", 00:07:23.833 "adrfam": "IPv4", 00:07:23.833 "traddr": "10.0.0.2", 00:07:23.833 "trsvcid": "4420", 00:07:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:23.833 }, 00:07:23.833 "ctrlr_data": { 00:07:23.833 "cntlid": 1, 00:07:23.833 "vendor_id": "0x8086", 00:07:23.833 "model_number": "SPDK bdev Controller", 00:07:23.833 "serial_number": "SPDK0", 00:07:23.833 "firmware_revision": "25.01", 00:07:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:23.833 "oacs": { 00:07:23.833 "security": 0, 00:07:23.833 "format": 0, 00:07:23.833 "firmware": 0, 00:07:23.833 "ns_manage": 0 00:07:23.833 }, 00:07:23.834 "multi_ctrlr": true, 00:07:23.834 "ana_reporting": false 00:07:23.834 }, 00:07:23.834 "vs": { 00:07:23.834 "nvme_version": "1.3" 00:07:23.834 }, 00:07:23.834 "ns_data": { 00:07:23.834 "id": 1, 00:07:23.834 "can_share": true 00:07:23.834 } 00:07:23.834 } 00:07:23.834 ], 00:07:23.834 "mp_policy": "active_passive" 00:07:23.834 } 00:07:23.834 } 00:07:23.834 ] 00:07:23.834 17:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1755561 00:07:23.834 17:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:23.834 17:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:24.091 Running I/O for 10 seconds... 00:07:25.024 Latency(us) 00:07:25.024 [2024-12-09T16:17:51.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.024 Nvme0n1 : 1.00 23476.00 91.70 0.00 0.00 0.00 0.00 0.00 00:07:25.024 [2024-12-09T16:17:51.564Z] =================================================================================================================== 00:07:25.024 [2024-12-09T16:17:51.564Z] Total : 23476.00 91.70 0.00 0.00 0.00 0.00 0.00 00:07:25.024 00:07:25.957 17:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f09fff0b-3ee8-4c86-8023-8925932061c5 00:07:25.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.957 Nvme0n1 : 2.00 23711.00 92.62 0.00 0.00 0.00 0.00 0.00 00:07:25.957 [2024-12-09T16:17:52.497Z] =================================================================================================================== 00:07:25.957 [2024-12-09T16:17:52.497Z] Total : 23711.00 92.62 0.00 0.00 0.00 0.00 0.00 00:07:25.957 00:07:25.957 true 00:07:26.215 17:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f09fff0b-3ee8-4c86-8023-8925932061c5 00:07:26.215 17:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:26.215 17:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:26.215 17:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:26.215 17:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1755561 00:07:27.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.149 Nvme0n1 : 3.00 23810.33 93.01 0.00 0.00 0.00 0.00 0.00 00:07:27.149 [2024-12-09T16:17:53.689Z] =================================================================================================================== 00:07:27.149 [2024-12-09T16:17:53.689Z] Total : 23810.33 93.01 0.00 0.00 0.00 0.00 0.00 00:07:27.149 00:07:28.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.082 Nvme0n1 : 4.00 23906.25 93.38 0.00 0.00 0.00 0.00 0.00 00:07:28.082 [2024-12-09T16:17:54.622Z] =================================================================================================================== 00:07:28.082 [2024-12-09T16:17:54.622Z] Total : 23906.25 93.38 0.00 0.00 0.00 0.00 0.00 00:07:28.082 00:07:29.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.023 Nvme0n1 : 5.00 23967.20 93.62 0.00 0.00 0.00 0.00 0.00 00:07:29.023 [2024-12-09T16:17:55.563Z] =================================================================================================================== 00:07:29.023 [2024-12-09T16:17:55.563Z] Total : 23967.20 93.62 0.00 0.00 0.00 0.00 0.00 00:07:29.023 00:07:29.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.956 Nvme0n1 : 6.00 24016.50 93.81 0.00 0.00 0.00 0.00 0.00 00:07:29.956 [2024-12-09T16:17:56.496Z] =================================================================================================================== 00:07:29.956 
[2024-12-09T16:17:56.496Z] Total : 24016.50 93.81 0.00 0.00 0.00 0.00 0.00 00:07:29.956 00:07:30.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.889 Nvme0n1 : 7.00 24052.86 93.96 0.00 0.00 0.00 0.00 0.00 00:07:30.889 [2024-12-09T16:17:57.429Z] =================================================================================================================== 00:07:30.889 [2024-12-09T16:17:57.429Z] Total : 24052.86 93.96 0.00 0.00 0.00 0.00 0.00 00:07:30.889 00:07:32.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.261 Nvme0n1 : 8.00 24083.62 94.08 0.00 0.00 0.00 0.00 0.00 00:07:32.261 [2024-12-09T16:17:58.802Z] =================================================================================================================== 00:07:32.262 [2024-12-09T16:17:58.802Z] Total : 24083.62 94.08 0.00 0.00 0.00 0.00 0.00 00:07:32.262 00:07:33.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.195 Nvme0n1 : 9.00 24103.44 94.15 0.00 0.00 0.00 0.00 0.00 00:07:33.195 [2024-12-09T16:17:59.735Z] =================================================================================================================== 00:07:33.195 [2024-12-09T16:17:59.735Z] Total : 24103.44 94.15 0.00 0.00 0.00 0.00 0.00 00:07:33.195 00:07:34.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.204 Nvme0n1 : 10.00 24081.30 94.07 0.00 0.00 0.00 0.00 0.00 00:07:34.204 [2024-12-09T16:18:00.744Z] =================================================================================================================== 00:07:34.204 [2024-12-09T16:18:00.744Z] Total : 24081.30 94.07 0.00 0.00 0.00 0.00 0.00 00:07:34.204 00:07:34.204 00:07:34.204 Latency(us) 00:07:34.204 [2024-12-09T16:18:00.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:34.204 Nvme0n1 : 10.00 24085.07 94.08 0.00 0.00 5311.56 3105.16 11234.74 00:07:34.204 [2024-12-09T16:18:00.744Z] =================================================================================================================== 00:07:34.204 [2024-12-09T16:18:00.744Z] Total : 24085.07 94.08 0.00 0.00 5311.56 3105.16 11234.74 00:07:34.204 { 00:07:34.204 "results": [ 00:07:34.204 { 00:07:34.204 "job": "Nvme0n1", 00:07:34.204 "core_mask": "0x2", 00:07:34.204 "workload": "randwrite", 00:07:34.204 "status": "finished", 00:07:34.204 "queue_depth": 128, 00:07:34.204 "io_size": 4096, 00:07:34.204 "runtime": 10.003749, 00:07:34.204 "iops": 24085.0705070669, 00:07:34.204 "mibps": 94.08230666823007, 00:07:34.204 "io_failed": 0, 00:07:34.204 "io_timeout": 0, 00:07:34.204 "avg_latency_us": 5311.564907449187, 00:07:34.204 "min_latency_us": 3105.158095238095, 00:07:34.204 "max_latency_us": 11234.742857142857 00:07:34.204 } 00:07:34.204 ], 00:07:34.204 "core_count": 1 00:07:34.204 } 00:07:34.204 17:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1755507 00:07:34.204 17:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1755507 ']' 00:07:34.204 17:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1755507 00:07:34.204 17:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:34.204 17:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.204 17:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1755507 00:07:34.204 17:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:34.204 17:18:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:34.204 17:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1755507' 00:07:34.204 killing process with pid 1755507 00:07:34.204 17:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1755507 00:07:34.204 Received shutdown signal, test time was about 10.000000 seconds 00:07:34.204 00:07:34.204 Latency(us) 00:07:34.204 [2024-12-09T16:18:00.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.204 [2024-12-09T16:18:00.744Z] =================================================================================================================== 00:07:34.204 [2024-12-09T16:18:00.744Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:34.204 17:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1755507 00:07:34.204 17:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.463 17:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:34.721 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f09fff0b-3ee8-4c86-8023-8925932061c5 00:07:34.721 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:34.979 [2024-12-09 17:18:01.457881] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f09fff0b-3ee8-4c86-8023-8925932061c5 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f09fff0b-3ee8-4c86-8023-8925932061c5 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.979 
17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:34.979 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f09fff0b-3ee8-4c86-8023-8925932061c5 00:07:35.237 request: 00:07:35.237 { 00:07:35.237 "uuid": "f09fff0b-3ee8-4c86-8023-8925932061c5", 00:07:35.237 "method": "bdev_lvol_get_lvstores", 00:07:35.237 "req_id": 1 00:07:35.237 } 00:07:35.237 Got JSON-RPC error response 00:07:35.237 response: 00:07:35.237 { 00:07:35.237 "code": -19, 00:07:35.237 "message": "No such device" 00:07:35.237 } 00:07:35.237 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:35.237 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:35.237 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:35.237 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:35.237 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:35.495 aio_bdev 00:07:35.495 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev d0ebbcba-b123-42d6-8a13-bd47449c8718 00:07:35.495 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=d0ebbcba-b123-42d6-8a13-bd47449c8718 00:07:35.495 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:35.495 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:35.495 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:35.495 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:35.495 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:35.753 17:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d0ebbcba-b123-42d6-8a13-bd47449c8718 -t 2000 00:07:35.753 [ 00:07:35.753 { 00:07:35.753 "name": "d0ebbcba-b123-42d6-8a13-bd47449c8718", 00:07:35.753 "aliases": [ 00:07:35.753 "lvs/lvol" 00:07:35.753 ], 00:07:35.753 "product_name": "Logical Volume", 00:07:35.753 "block_size": 4096, 00:07:35.753 "num_blocks": 38912, 00:07:35.753 "uuid": "d0ebbcba-b123-42d6-8a13-bd47449c8718", 00:07:35.753 "assigned_rate_limits": { 00:07:35.753 "rw_ios_per_sec": 0, 00:07:35.753 "rw_mbytes_per_sec": 0, 00:07:35.753 "r_mbytes_per_sec": 0, 00:07:35.753 "w_mbytes_per_sec": 0 00:07:35.753 }, 00:07:35.753 "claimed": false, 00:07:35.753 "zoned": false, 00:07:35.753 "supported_io_types": { 00:07:35.753 "read": true, 00:07:35.753 "write": true, 00:07:35.753 "unmap": true, 00:07:35.753 "flush": false, 00:07:35.753 "reset": true, 00:07:35.753 
"nvme_admin": false, 00:07:35.753 "nvme_io": false, 00:07:35.753 "nvme_io_md": false, 00:07:35.753 "write_zeroes": true, 00:07:35.753 "zcopy": false, 00:07:35.753 "get_zone_info": false, 00:07:35.753 "zone_management": false, 00:07:35.753 "zone_append": false, 00:07:35.753 "compare": false, 00:07:35.753 "compare_and_write": false, 00:07:35.753 "abort": false, 00:07:35.753 "seek_hole": true, 00:07:35.753 "seek_data": true, 00:07:35.753 "copy": false, 00:07:35.753 "nvme_iov_md": false 00:07:35.753 }, 00:07:35.753 "driver_specific": { 00:07:35.753 "lvol": { 00:07:35.753 "lvol_store_uuid": "f09fff0b-3ee8-4c86-8023-8925932061c5", 00:07:35.753 "base_bdev": "aio_bdev", 00:07:35.753 "thin_provision": false, 00:07:35.753 "num_allocated_clusters": 38, 00:07:35.753 "snapshot": false, 00:07:35.753 "clone": false, 00:07:35.753 "esnap_clone": false 00:07:35.753 } 00:07:35.753 } 00:07:35.753 } 00:07:35.753 ] 00:07:35.753 17:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:35.753 17:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f09fff0b-3ee8-4c86-8023-8925932061c5 00:07:35.753 17:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:36.012 17:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:36.012 17:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f09fff0b-3ee8-4c86-8023-8925932061c5 00:07:36.012 17:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:36.270 17:18:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:36.270 17:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d0ebbcba-b123-42d6-8a13-bd47449c8718 00:07:36.270 17:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f09fff0b-3ee8-4c86-8023-8925932061c5 00:07:36.529 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:36.787 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:36.787 00:07:36.787 real 0m15.704s 00:07:36.787 user 0m15.207s 00:07:36.787 sys 0m1.502s 00:07:36.787 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.787 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:36.787 ************************************ 00:07:36.787 END TEST lvs_grow_clean 00:07:36.787 ************************************ 00:07:36.787 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:36.787 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:36.787 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.787 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:37.045 ************************************ 
00:07:37.045 START TEST lvs_grow_dirty 00:07:37.045 ************************************ 00:07:37.045 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:37.045 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:37.045 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:37.045 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:37.045 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:37.045 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:37.045 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:37.045 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:37.045 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:37.045 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:37.045 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:37.045 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:37.303 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=daa0af78-ca49-4209-a715-fdae60aacecc 00:07:37.303 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u daa0af78-ca49-4209-a715-fdae60aacecc 00:07:37.303 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:37.562 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:37.562 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:37.562 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u daa0af78-ca49-4209-a715-fdae60aacecc lvol 150 00:07:37.820 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=42958859-c99f-42ff-b670-f0b0fd467aa2 00:07:37.820 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:37.820 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:37.821 [2024-12-09 17:18:04.311081] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:37.821 [2024-12-09 17:18:04.311133] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:37.821 true 00:07:37.821 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:37.821 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u daa0af78-ca49-4209-a715-fdae60aacecc 00:07:38.079 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:38.079 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:38.338 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 42958859-c99f-42ff-b670-f0b0fd467aa2 00:07:38.597 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:38.597 [2024-12-09 17:18:05.065303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.597 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:38.855 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1758217 00:07:38.855 17:18:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:38.855 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:38.855 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1758217 /var/tmp/bdevperf.sock 00:07:38.855 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1758217 ']' 00:07:38.855 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:38.855 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.855 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:38.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:38.855 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.855 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:38.855 [2024-12-09 17:18:05.309212] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:07:38.855 [2024-12-09 17:18:05.309262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1758217 ] 00:07:38.855 [2024-12-09 17:18:05.382530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.113 [2024-12-09 17:18:05.422112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.113 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.113 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:39.113 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:39.679 Nvme0n1 00:07:39.679 17:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:39.679 [ 00:07:39.679 { 00:07:39.679 "name": "Nvme0n1", 00:07:39.679 "aliases": [ 00:07:39.679 "42958859-c99f-42ff-b670-f0b0fd467aa2" 00:07:39.679 ], 00:07:39.679 "product_name": "NVMe disk", 00:07:39.679 "block_size": 4096, 00:07:39.679 "num_blocks": 38912, 00:07:39.679 "uuid": "42958859-c99f-42ff-b670-f0b0fd467aa2", 00:07:39.679 "numa_id": 1, 00:07:39.679 "assigned_rate_limits": { 00:07:39.679 "rw_ios_per_sec": 0, 00:07:39.679 "rw_mbytes_per_sec": 0, 00:07:39.679 "r_mbytes_per_sec": 0, 00:07:39.679 "w_mbytes_per_sec": 0 00:07:39.679 }, 00:07:39.679 "claimed": false, 00:07:39.679 "zoned": false, 00:07:39.679 "supported_io_types": { 00:07:39.679 "read": true, 
00:07:39.679 "write": true, 00:07:39.679 "unmap": true, 00:07:39.679 "flush": true, 00:07:39.679 "reset": true, 00:07:39.679 "nvme_admin": true, 00:07:39.679 "nvme_io": true, 00:07:39.679 "nvme_io_md": false, 00:07:39.679 "write_zeroes": true, 00:07:39.679 "zcopy": false, 00:07:39.679 "get_zone_info": false, 00:07:39.679 "zone_management": false, 00:07:39.679 "zone_append": false, 00:07:39.679 "compare": true, 00:07:39.679 "compare_and_write": true, 00:07:39.679 "abort": true, 00:07:39.679 "seek_hole": false, 00:07:39.679 "seek_data": false, 00:07:39.679 "copy": true, 00:07:39.679 "nvme_iov_md": false 00:07:39.679 }, 00:07:39.679 "memory_domains": [ 00:07:39.679 { 00:07:39.679 "dma_device_id": "system", 00:07:39.679 "dma_device_type": 1 00:07:39.679 } 00:07:39.679 ], 00:07:39.679 "driver_specific": { 00:07:39.679 "nvme": [ 00:07:39.679 { 00:07:39.679 "trid": { 00:07:39.679 "trtype": "TCP", 00:07:39.679 "adrfam": "IPv4", 00:07:39.679 "traddr": "10.0.0.2", 00:07:39.679 "trsvcid": "4420", 00:07:39.679 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:39.679 }, 00:07:39.679 "ctrlr_data": { 00:07:39.679 "cntlid": 1, 00:07:39.679 "vendor_id": "0x8086", 00:07:39.679 "model_number": "SPDK bdev Controller", 00:07:39.679 "serial_number": "SPDK0", 00:07:39.679 "firmware_revision": "25.01", 00:07:39.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:39.679 "oacs": { 00:07:39.679 "security": 0, 00:07:39.679 "format": 0, 00:07:39.679 "firmware": 0, 00:07:39.679 "ns_manage": 0 00:07:39.679 }, 00:07:39.679 "multi_ctrlr": true, 00:07:39.679 "ana_reporting": false 00:07:39.679 }, 00:07:39.679 "vs": { 00:07:39.679 "nvme_version": "1.3" 00:07:39.679 }, 00:07:39.679 "ns_data": { 00:07:39.679 "id": 1, 00:07:39.679 "can_share": true 00:07:39.679 } 00:07:39.679 } 00:07:39.679 ], 00:07:39.679 "mp_policy": "active_passive" 00:07:39.679 } 00:07:39.679 } 00:07:39.679 ] 00:07:39.679 17:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1758570 00:07:39.679 17:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:39.679 17:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:39.679 Running I/O for 10 seconds... 00:07:41.052 Latency(us) 00:07:41.052 [2024-12-09T16:18:07.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.052 Nvme0n1 : 1.00 23625.00 92.29 0.00 0.00 0.00 0.00 0.00 00:07:41.052 [2024-12-09T16:18:07.592Z] =================================================================================================================== 00:07:41.052 [2024-12-09T16:18:07.592Z] Total : 23625.00 92.29 0.00 0.00 0.00 0.00 0.00 00:07:41.052 00:07:41.618 17:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u daa0af78-ca49-4209-a715-fdae60aacecc 00:07:41.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.876 Nvme0n1 : 2.00 23784.00 92.91 0.00 0.00 0.00 0.00 0.00 00:07:41.876 [2024-12-09T16:18:08.416Z] =================================================================================================================== 00:07:41.876 [2024-12-09T16:18:08.416Z] Total : 23784.00 92.91 0.00 0.00 0.00 0.00 0.00 00:07:41.876 00:07:41.876 true 00:07:41.876 17:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u daa0af78-ca49-4209-a715-fdae60aacecc 00:07:41.876 17:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:42.133 17:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:42.133 17:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:42.133 17:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1758570 00:07:42.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.699 Nvme0n1 : 3.00 23861.33 93.21 0.00 0.00 0.00 0.00 0.00 00:07:42.699 [2024-12-09T16:18:09.239Z] =================================================================================================================== 00:07:42.699 [2024-12-09T16:18:09.239Z] Total : 23861.33 93.21 0.00 0.00 0.00 0.00 0.00 00:07:42.699 00:07:44.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.072 Nvme0n1 : 4.00 23897.75 93.35 0.00 0.00 0.00 0.00 0.00 00:07:44.072 [2024-12-09T16:18:10.612Z] =================================================================================================================== 00:07:44.072 [2024-12-09T16:18:10.612Z] Total : 23897.75 93.35 0.00 0.00 0.00 0.00 0.00 00:07:44.072 00:07:45.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.006 Nvme0n1 : 5.00 23898.00 93.35 0.00 0.00 0.00 0.00 0.00 00:07:45.006 [2024-12-09T16:18:11.546Z] =================================================================================================================== 00:07:45.006 [2024-12-09T16:18:11.546Z] Total : 23898.00 93.35 0.00 0.00 0.00 0.00 0.00 00:07:45.006 00:07:45.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.938 Nvme0n1 : 6.00 23972.00 93.64 0.00 0.00 0.00 0.00 0.00 00:07:45.938 [2024-12-09T16:18:12.478Z] =================================================================================================================== 00:07:45.938 
[2024-12-09T16:18:12.478Z] Total : 23972.00 93.64 0.00 0.00 0.00 0.00 0.00 00:07:45.938 00:07:46.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.872 Nvme0n1 : 7.00 24006.86 93.78 0.00 0.00 0.00 0.00 0.00 00:07:46.872 [2024-12-09T16:18:13.412Z] =================================================================================================================== 00:07:46.872 [2024-12-09T16:18:13.412Z] Total : 24006.86 93.78 0.00 0.00 0.00 0.00 0.00 00:07:46.872 00:07:47.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.806 Nvme0n1 : 8.00 24038.38 93.90 0.00 0.00 0.00 0.00 0.00 00:07:47.806 [2024-12-09T16:18:14.346Z] =================================================================================================================== 00:07:47.806 [2024-12-09T16:18:14.346Z] Total : 24038.38 93.90 0.00 0.00 0.00 0.00 0.00 00:07:47.806 00:07:48.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.738 Nvme0n1 : 9.00 24072.22 94.03 0.00 0.00 0.00 0.00 0.00 00:07:48.738 [2024-12-09T16:18:15.278Z] =================================================================================================================== 00:07:48.738 [2024-12-09T16:18:15.278Z] Total : 24072.22 94.03 0.00 0.00 0.00 0.00 0.00 00:07:48.738 00:07:50.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.111 Nvme0n1 : 10.00 24091.20 94.11 0.00 0.00 0.00 0.00 0.00 00:07:50.111 [2024-12-09T16:18:16.651Z] =================================================================================================================== 00:07:50.111 [2024-12-09T16:18:16.651Z] Total : 24091.20 94.11 0.00 0.00 0.00 0.00 0.00 00:07:50.111 00:07:50.111 00:07:50.111 Latency(us) 00:07:50.111 [2024-12-09T16:18:16.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:50.111 Nvme0n1 : 10.01 24091.65 94.11 0.00 0.00 5310.02 3120.76 12982.37 00:07:50.111 [2024-12-09T16:18:16.651Z] =================================================================================================================== 00:07:50.111 [2024-12-09T16:18:16.651Z] Total : 24091.65 94.11 0.00 0.00 5310.02 3120.76 12982.37 00:07:50.111 { 00:07:50.111 "results": [ 00:07:50.111 { 00:07:50.111 "job": "Nvme0n1", 00:07:50.111 "core_mask": "0x2", 00:07:50.111 "workload": "randwrite", 00:07:50.111 "status": "finished", 00:07:50.111 "queue_depth": 128, 00:07:50.111 "io_size": 4096, 00:07:50.111 "runtime": 10.005125, 00:07:50.111 "iops": 24091.653027823242, 00:07:50.111 "mibps": 94.10801963993454, 00:07:50.111 "io_failed": 0, 00:07:50.111 "io_timeout": 0, 00:07:50.111 "avg_latency_us": 5310.0225372591785, 00:07:50.111 "min_latency_us": 3120.7619047619046, 00:07:50.111 "max_latency_us": 12982.369523809524 00:07:50.111 } 00:07:50.111 ], 00:07:50.111 "core_count": 1 00:07:50.111 } 00:07:50.111 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1758217 00:07:50.111 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1758217 ']' 00:07:50.111 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1758217 00:07:50.111 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:50.111 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.111 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1758217 00:07:50.111 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:50.111 17:18:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:50.111 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1758217' 00:07:50.111 killing process with pid 1758217 00:07:50.111 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1758217 00:07:50.111 Received shutdown signal, test time was about 10.000000 seconds 00:07:50.111 00:07:50.111 Latency(us) 00:07:50.111 [2024-12-09T16:18:16.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.111 [2024-12-09T16:18:16.651Z] =================================================================================================================== 00:07:50.111 [2024-12-09T16:18:16.651Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:50.111 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1758217 00:07:50.111 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:50.369 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:50.370 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u daa0af78-ca49-4209-a715-fdae60aacecc 00:07:50.370 17:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1755061 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1755061 00:07:50.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1755061 Killed "${NVMF_APP[@]}" "$@" 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1760628 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1760628 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1760628 ']' 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.628 17:18:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.628 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:50.887 [2024-12-09 17:18:17.177841] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:07:50.887 [2024-12-09 17:18:17.177886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.887 [2024-12-09 17:18:17.253270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.887 [2024-12-09 17:18:17.292382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.887 [2024-12-09 17:18:17.292417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.887 [2024-12-09 17:18:17.292424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.887 [2024-12-09 17:18:17.292433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.887 [2024-12-09 17:18:17.292438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:50.887 [2024-12-09 17:18:17.292937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.887 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.887 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:50.887 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:50.887 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:50.887 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:50.887 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.887 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:51.145 [2024-12-09 17:18:17.590521] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:51.145 [2024-12-09 17:18:17.590609] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:51.145 [2024-12-09 17:18:17.590636] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:51.145 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:51.145 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 42958859-c99f-42ff-b670-f0b0fd467aa2 00:07:51.145 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=42958859-c99f-42ff-b670-f0b0fd467aa2 
00:07:51.145 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.145 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:51.145 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.145 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:51.145 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:51.403 17:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 42958859-c99f-42ff-b670-f0b0fd467aa2 -t 2000 00:07:51.690 [ 00:07:51.690 { 00:07:51.690 "name": "42958859-c99f-42ff-b670-f0b0fd467aa2", 00:07:51.690 "aliases": [ 00:07:51.690 "lvs/lvol" 00:07:51.690 ], 00:07:51.690 "product_name": "Logical Volume", 00:07:51.690 "block_size": 4096, 00:07:51.690 "num_blocks": 38912, 00:07:51.690 "uuid": "42958859-c99f-42ff-b670-f0b0fd467aa2", 00:07:51.690 "assigned_rate_limits": { 00:07:51.690 "rw_ios_per_sec": 0, 00:07:51.690 "rw_mbytes_per_sec": 0, 00:07:51.690 "r_mbytes_per_sec": 0, 00:07:51.690 "w_mbytes_per_sec": 0 00:07:51.690 }, 00:07:51.690 "claimed": false, 00:07:51.690 "zoned": false, 00:07:51.690 "supported_io_types": { 00:07:51.690 "read": true, 00:07:51.690 "write": true, 00:07:51.690 "unmap": true, 00:07:51.690 "flush": false, 00:07:51.690 "reset": true, 00:07:51.690 "nvme_admin": false, 00:07:51.690 "nvme_io": false, 00:07:51.691 "nvme_io_md": false, 00:07:51.691 "write_zeroes": true, 00:07:51.691 "zcopy": false, 00:07:51.691 "get_zone_info": false, 00:07:51.691 "zone_management": false, 00:07:51.691 "zone_append": 
false, 00:07:51.691 "compare": false, 00:07:51.691 "compare_and_write": false, 00:07:51.691 "abort": false, 00:07:51.691 "seek_hole": true, 00:07:51.691 "seek_data": true, 00:07:51.691 "copy": false, 00:07:51.691 "nvme_iov_md": false 00:07:51.691 }, 00:07:51.691 "driver_specific": { 00:07:51.691 "lvol": { 00:07:51.691 "lvol_store_uuid": "daa0af78-ca49-4209-a715-fdae60aacecc", 00:07:51.691 "base_bdev": "aio_bdev", 00:07:51.691 "thin_provision": false, 00:07:51.691 "num_allocated_clusters": 38, 00:07:51.691 "snapshot": false, 00:07:51.691 "clone": false, 00:07:51.691 "esnap_clone": false 00:07:51.691 } 00:07:51.691 } 00:07:51.691 } 00:07:51.691 ] 00:07:51.691 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:51.691 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u daa0af78-ca49-4209-a715-fdae60aacecc 00:07:51.691 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:51.691 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:51.691 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u daa0af78-ca49-4209-a715-fdae60aacecc 00:07:51.691 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:51.976 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:51.976 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:52.265 [2024-12-09 17:18:18.555538] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u daa0af78-ca49-4209-a715-fdae60aacecc 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u daa0af78-ca49-4209-a715-fdae60aacecc 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.265 17:18:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u daa0af78-ca49-4209-a715-fdae60aacecc 00:07:52.265 request: 00:07:52.265 { 00:07:52.265 "uuid": "daa0af78-ca49-4209-a715-fdae60aacecc", 00:07:52.265 "method": "bdev_lvol_get_lvstores", 00:07:52.265 "req_id": 1 00:07:52.265 } 00:07:52.265 Got JSON-RPC error response 00:07:52.265 response: 00:07:52.265 { 00:07:52.265 "code": -19, 00:07:52.265 "message": "No such device" 00:07:52.265 } 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.265 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:52.557 aio_bdev 00:07:52.557 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 42958859-c99f-42ff-b670-f0b0fd467aa2 00:07:52.557 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=42958859-c99f-42ff-b670-f0b0fd467aa2 00:07:52.557 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.557 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:52.557 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.557 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.557 17:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:52.815 17:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 42958859-c99f-42ff-b670-f0b0fd467aa2 -t 2000 00:07:52.815 [ 00:07:52.815 { 00:07:52.816 "name": "42958859-c99f-42ff-b670-f0b0fd467aa2", 00:07:52.816 "aliases": [ 00:07:52.816 "lvs/lvol" 00:07:52.816 ], 00:07:52.816 "product_name": "Logical Volume", 00:07:52.816 "block_size": 4096, 00:07:52.816 "num_blocks": 38912, 00:07:52.816 "uuid": "42958859-c99f-42ff-b670-f0b0fd467aa2", 00:07:52.816 "assigned_rate_limits": { 00:07:52.816 "rw_ios_per_sec": 0, 00:07:52.816 "rw_mbytes_per_sec": 0, 00:07:52.816 "r_mbytes_per_sec": 0, 00:07:52.816 "w_mbytes_per_sec": 0 00:07:52.816 }, 00:07:52.816 "claimed": false, 00:07:52.816 "zoned": false, 00:07:52.816 "supported_io_types": { 00:07:52.816 "read": true, 00:07:52.816 "write": true, 00:07:52.816 "unmap": true, 00:07:52.816 "flush": false, 00:07:52.816 "reset": true, 00:07:52.816 "nvme_admin": false, 00:07:52.816 "nvme_io": false, 00:07:52.816 "nvme_io_md": false, 00:07:52.816 "write_zeroes": true, 00:07:52.816 "zcopy": false, 00:07:52.816 "get_zone_info": false, 00:07:52.816 "zone_management": false, 00:07:52.816 "zone_append": false, 00:07:52.816 "compare": false, 00:07:52.816 "compare_and_write": false, 
00:07:52.816 "abort": false, 00:07:52.816 "seek_hole": true, 00:07:52.816 "seek_data": true, 00:07:52.816 "copy": false, 00:07:52.816 "nvme_iov_md": false 00:07:52.816 }, 00:07:52.816 "driver_specific": { 00:07:52.816 "lvol": { 00:07:52.816 "lvol_store_uuid": "daa0af78-ca49-4209-a715-fdae60aacecc", 00:07:52.816 "base_bdev": "aio_bdev", 00:07:52.816 "thin_provision": false, 00:07:52.816 "num_allocated_clusters": 38, 00:07:52.816 "snapshot": false, 00:07:52.816 "clone": false, 00:07:52.816 "esnap_clone": false 00:07:52.816 } 00:07:52.816 } 00:07:52.816 } 00:07:52.816 ] 00:07:52.816 17:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:52.816 17:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u daa0af78-ca49-4209-a715-fdae60aacecc 00:07:52.816 17:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:53.074 17:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:53.074 17:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u daa0af78-ca49-4209-a715-fdae60aacecc 00:07:53.074 17:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:53.332 17:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:53.332 17:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 42958859-c99f-42ff-b670-f0b0fd467aa2 00:07:53.590 17:18:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u daa0af78-ca49-4209-a715-fdae60aacecc 00:07:53.848 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:53.848 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:53.848 00:07:53.848 real 0m17.015s 00:07:53.848 user 0m43.999s 00:07:53.848 sys 0m3.745s 00:07:53.848 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.848 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:53.848 ************************************ 00:07:53.848 END TEST lvs_grow_dirty 00:07:53.848 ************************************ 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:54.106 nvmf_trace.0 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:54.106 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:54.106 rmmod nvme_tcp 00:07:54.107 rmmod nvme_fabrics 00:07:54.107 rmmod nvme_keyring 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1760628 ']' 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1760628 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1760628 ']' 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1760628 
00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1760628 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1760628' 00:07:54.107 killing process with pid 1760628 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1760628 00:07:54.107 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1760628 00:07:54.365 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.365 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.365 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.365 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:54.365 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:54.365 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.365 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.365 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.365 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:54.365 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.365 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.365 17:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.271 17:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:56.271 00:07:56.271 real 0m41.973s 00:07:56.271 user 1m4.824s 00:07:56.271 sys 0m10.228s 00:07:56.271 17:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.271 17:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:56.271 ************************************ 00:07:56.271 END TEST nvmf_lvs_grow 00:07:56.271 ************************************ 00:07:56.529 17:18:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:56.529 17:18:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:56.529 17:18:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.529 17:18:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:56.529 ************************************ 00:07:56.529 START TEST nvmf_bdev_io_wait 00:07:56.529 ************************************ 00:07:56.529 17:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:56.529 * Looking for test storage... 
00:07:56.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.529 17:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:56.529 17:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:56.529 17:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:56.529 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:56.530 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.530 --rc genhtml_branch_coverage=1 00:07:56.530 --rc genhtml_function_coverage=1 00:07:56.530 --rc genhtml_legend=1 00:07:56.530 --rc geninfo_all_blocks=1 00:07:56.530 --rc geninfo_unexecuted_blocks=1 00:07:56.530 00:07:56.530 ' 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:56.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.530 --rc genhtml_branch_coverage=1 00:07:56.530 --rc genhtml_function_coverage=1 00:07:56.530 --rc genhtml_legend=1 00:07:56.530 --rc geninfo_all_blocks=1 00:07:56.530 --rc geninfo_unexecuted_blocks=1 00:07:56.530 00:07:56.530 ' 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:56.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.530 --rc genhtml_branch_coverage=1 00:07:56.530 --rc genhtml_function_coverage=1 00:07:56.530 --rc genhtml_legend=1 00:07:56.530 --rc geninfo_all_blocks=1 00:07:56.530 --rc geninfo_unexecuted_blocks=1 00:07:56.530 00:07:56.530 ' 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:56.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.530 --rc genhtml_branch_coverage=1 00:07:56.530 --rc genhtml_function_coverage=1 00:07:56.530 --rc genhtml_legend=1 00:07:56.530 --rc geninfo_all_blocks=1 00:07:56.530 --rc geninfo_unexecuted_blocks=1 00:07:56.530 00:07:56.530 ' 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.530 17:18:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.530 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:56.789 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:03.359 17:18:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:03.359 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:03.359 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.359 17:18:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.359 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:03.360 Found net devices under 0000:af:00.0: cvl_0_0 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.360 
17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:03.360 Found net devices under 0000:af:00.1: cvl_0_1 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.360 17:18:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:03.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:08:03.360 00:08:03.360 --- 10.0.0.2 ping statistics --- 00:08:03.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.360 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:03.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:08:03.360 00:08:03.360 --- 10.0.0.1 ping statistics --- 00:08:03.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.360 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:08:03.360 17:18:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1764833 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1764833 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1764833 ']' 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.360 [2024-12-09 17:18:29.098589] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:08:03.360 [2024-12-09 17:18:29.098638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.360 [2024-12-09 17:18:29.177791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.360 [2024-12-09 17:18:29.220084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.360 [2024-12-09 17:18:29.220122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:03.360 [2024-12-09 17:18:29.220129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.360 [2024-12-09 17:18:29.220135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.360 [2024-12-09 17:18:29.220140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.360 [2024-12-09 17:18:29.221585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.360 [2024-12-09 17:18:29.221693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.360 [2024-12-09 17:18:29.221803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.360 [2024-12-09 17:18:29.221804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.360 17:18:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.360 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.361 [2024-12-09 17:18:29.353428] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.361 Malloc0 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.361 
17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.361 [2024-12-09 17:18:29.408700] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1764855 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1764857 
00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:03.361 { 00:08:03.361 "params": { 00:08:03.361 "name": "Nvme$subsystem", 00:08:03.361 "trtype": "$TEST_TRANSPORT", 00:08:03.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.361 "adrfam": "ipv4", 00:08:03.361 "trsvcid": "$NVMF_PORT", 00:08:03.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.361 "hdgst": ${hdgst:-false}, 00:08:03.361 "ddgst": ${ddgst:-false} 00:08:03.361 }, 00:08:03.361 "method": "bdev_nvme_attach_controller" 00:08:03.361 } 00:08:03.361 EOF 00:08:03.361 )") 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1764859 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:03.361 { 00:08:03.361 "params": { 00:08:03.361 
"name": "Nvme$subsystem", 00:08:03.361 "trtype": "$TEST_TRANSPORT", 00:08:03.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.361 "adrfam": "ipv4", 00:08:03.361 "trsvcid": "$NVMF_PORT", 00:08:03.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.361 "hdgst": ${hdgst:-false}, 00:08:03.361 "ddgst": ${ddgst:-false} 00:08:03.361 }, 00:08:03.361 "method": "bdev_nvme_attach_controller" 00:08:03.361 } 00:08:03.361 EOF 00:08:03.361 )") 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1764862 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:08:03.361 { 00:08:03.361 "params": { 00:08:03.361 "name": "Nvme$subsystem", 00:08:03.361 "trtype": "$TEST_TRANSPORT", 00:08:03.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.361 "adrfam": "ipv4", 00:08:03.361 "trsvcid": "$NVMF_PORT", 00:08:03.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.361 "hdgst": ${hdgst:-false}, 00:08:03.361 "ddgst": ${ddgst:-false} 00:08:03.361 }, 00:08:03.361 "method": "bdev_nvme_attach_controller" 00:08:03.361 } 00:08:03.361 EOF 00:08:03.361 )") 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:03.361 { 00:08:03.361 "params": { 00:08:03.361 "name": "Nvme$subsystem", 00:08:03.361 "trtype": "$TEST_TRANSPORT", 00:08:03.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.361 "adrfam": "ipv4", 00:08:03.361 "trsvcid": "$NVMF_PORT", 00:08:03.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.361 "hdgst": ${hdgst:-false}, 00:08:03.361 "ddgst": ${ddgst:-false} 00:08:03.361 }, 00:08:03.361 "method": "bdev_nvme_attach_controller" 00:08:03.361 } 00:08:03.361 EOF 00:08:03.361 )") 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1764855 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:03.361 
17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:03.361 "params": { 00:08:03.361 "name": "Nvme1", 00:08:03.361 "trtype": "tcp", 00:08:03.361 "traddr": "10.0.0.2", 00:08:03.361 "adrfam": "ipv4", 00:08:03.361 "trsvcid": "4420", 00:08:03.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:03.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:03.361 "hdgst": false, 00:08:03.361 "ddgst": false 00:08:03.361 }, 00:08:03.361 "method": "bdev_nvme_attach_controller" 00:08:03.361 }' 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:03.361 "params": { 00:08:03.361 "name": "Nvme1", 00:08:03.361 "trtype": "tcp", 00:08:03.361 "traddr": "10.0.0.2", 00:08:03.361 "adrfam": "ipv4", 00:08:03.361 "trsvcid": "4420", 00:08:03.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:03.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:03.361 "hdgst": false, 00:08:03.361 "ddgst": false 00:08:03.361 }, 00:08:03.361 "method": "bdev_nvme_attach_controller" 00:08:03.361 }' 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:03.361 "params": { 00:08:03.361 "name": "Nvme1", 00:08:03.361 "trtype": "tcp", 00:08:03.361 "traddr": "10.0.0.2", 00:08:03.361 "adrfam": "ipv4", 00:08:03.361 "trsvcid": "4420", 00:08:03.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:03.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:03.361 "hdgst": false, 00:08:03.361 "ddgst": false 00:08:03.361 }, 00:08:03.361 "method": "bdev_nvme_attach_controller" 00:08:03.361 }' 00:08:03.361 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:03.362 17:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:03.362 "params": { 00:08:03.362 "name": "Nvme1", 00:08:03.362 "trtype": "tcp", 00:08:03.362 "traddr": "10.0.0.2", 00:08:03.362 "adrfam": "ipv4", 00:08:03.362 "trsvcid": "4420", 00:08:03.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:03.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:03.362 "hdgst": false, 00:08:03.362 "ddgst": false 00:08:03.362 }, 00:08:03.362 "method": "bdev_nvme_attach_controller" 00:08:03.362 }' 00:08:03.362 [2024-12-09 17:18:29.460365] Starting SPDK v25.01-pre git sha1 
608f2e392 / DPDK 24.03.0 initialization... 00:08:03.362 [2024-12-09 17:18:29.460414] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:03.362 [2024-12-09 17:18:29.462295] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:08:03.362 [2024-12-09 17:18:29.462323] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:08:03.362 [2024-12-09 17:18:29.462335] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:03.362 [2024-12-09 17:18:29.462358] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:03.362 [2024-12-09 17:18:29.462727] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:08:03.362 [2024-12-09 17:18:29.462762] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:03.362 [2024-12-09 17:18:29.634637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.362 [2024-12-09 17:18:29.679652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:03.362 [2024-12-09 17:18:29.728974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.362 [2024-12-09 17:18:29.782914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:03.362 [2024-12-09 17:18:29.832675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.362 [2024-12-09 17:18:29.877269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:03.620 [2024-12-09 17:18:29.927867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.620 [2024-12-09 17:18:29.983418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:03.620 Running I/O for 1 seconds... 00:08:03.620 Running I/O for 1 seconds... 00:08:03.620 Running I/O for 1 seconds... 00:08:03.877 Running I/O for 1 seconds... 
00:08:04.811 13885.00 IOPS, 54.24 MiB/s 00:08:04.811 Latency(us) 00:08:04.811 [2024-12-09T16:18:31.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.811 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:04.811 Nvme1n1 : 1.01 13932.02 54.42 0.00 0.00 9158.48 4930.80 17975.59 00:08:04.811 [2024-12-09T16:18:31.351Z] =================================================================================================================== 00:08:04.811 [2024-12-09T16:18:31.351Z] Total : 13932.02 54.42 0.00 0.00 9158.48 4930.80 17975.59 00:08:04.811 6468.00 IOPS, 25.27 MiB/s 00:08:04.811 Latency(us) 00:08:04.811 [2024-12-09T16:18:31.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.811 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:04.811 Nvme1n1 : 1.01 6520.27 25.47 0.00 0.00 19488.06 8488.47 25090.93 00:08:04.811 [2024-12-09T16:18:31.351Z] =================================================================================================================== 00:08:04.811 [2024-12-09T16:18:31.351Z] Total : 6520.27 25.47 0.00 0.00 19488.06 8488.47 25090.93 00:08:04.811 241232.00 IOPS, 942.31 MiB/s 00:08:04.811 Latency(us) 00:08:04.811 [2024-12-09T16:18:31.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.811 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:04.811 Nvme1n1 : 1.00 240867.62 940.89 0.00 0.00 528.44 224.30 1497.97 00:08:04.811 [2024-12-09T16:18:31.351Z] =================================================================================================================== 00:08:04.811 [2024-12-09T16:18:31.351Z] Total : 240867.62 940.89 0.00 0.00 528.44 224.30 1497.97 00:08:04.811 6615.00 IOPS, 25.84 MiB/s 00:08:04.811 Latency(us) 00:08:04.811 [2024-12-09T16:18:31.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.811 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:08:04.811 Nvme1n1 : 1.01 6717.42 26.24 0.00 0.00 18999.62 4244.24 41443.72 00:08:04.811 [2024-12-09T16:18:31.351Z] =================================================================================================================== 00:08:04.811 [2024-12-09T16:18:31.351Z] Total : 6717.42 26.24 0.00 0.00 18999.62 4244.24 41443.72 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1764857 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1764859 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1764862 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:08:04.811 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.811 rmmod nvme_tcp 00:08:04.811 rmmod nvme_fabrics 00:08:05.070 rmmod nvme_keyring 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1764833 ']' 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1764833 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1764833 ']' 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1764833 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1764833 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1764833' 00:08:05.070 killing process with pid 1764833 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1764833 00:08:05.070 17:18:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1764833 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.070 17:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:07.605 00:08:07.605 real 0m10.786s 00:08:07.605 user 0m16.257s 00:08:07.605 sys 0m6.054s 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.605 ************************************ 
00:08:07.605 END TEST nvmf_bdev_io_wait 00:08:07.605 ************************************ 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.605 ************************************ 00:08:07.605 START TEST nvmf_queue_depth 00:08:07.605 ************************************ 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:07.605 * Looking for test storage... 00:08:07.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:07.605 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:07.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.606 --rc genhtml_branch_coverage=1 00:08:07.606 --rc genhtml_function_coverage=1 00:08:07.606 --rc genhtml_legend=1 00:08:07.606 --rc geninfo_all_blocks=1 00:08:07.606 --rc 
geninfo_unexecuted_blocks=1 00:08:07.606 00:08:07.606 ' 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:07.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.606 --rc genhtml_branch_coverage=1 00:08:07.606 --rc genhtml_function_coverage=1 00:08:07.606 --rc genhtml_legend=1 00:08:07.606 --rc geninfo_all_blocks=1 00:08:07.606 --rc geninfo_unexecuted_blocks=1 00:08:07.606 00:08:07.606 ' 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:07.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.606 --rc genhtml_branch_coverage=1 00:08:07.606 --rc genhtml_function_coverage=1 00:08:07.606 --rc genhtml_legend=1 00:08:07.606 --rc geninfo_all_blocks=1 00:08:07.606 --rc geninfo_unexecuted_blocks=1 00:08:07.606 00:08:07.606 ' 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:07.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.606 --rc genhtml_branch_coverage=1 00:08:07.606 --rc genhtml_function_coverage=1 00:08:07.606 --rc genhtml_legend=1 00:08:07.606 --rc geninfo_all_blocks=1 00:08:07.606 --rc geninfo_unexecuted_blocks=1 00:08:07.606 00:08:07.606 ' 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.606 17:18:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.606 17:18:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.606 17:18:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.606 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:14.176 17:18:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:14.176 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:14.177 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:14.177 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:14.177 Found net devices under 0000:af:00.0: cvl_0_0 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:14.177 Found net devices under 0000:af:00.1: cvl_0_1 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.177 
17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:14.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:08:14.177 00:08:14.177 --- 10.0.0.2 ping statistics --- 00:08:14.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.177 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:14.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:08:14.177 00:08:14.177 --- 10.0.0.1 ping statistics --- 00:08:14.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.177 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1768709 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1768709 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1768709 ']' 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.177 17:18:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.177 [2024-12-09 17:18:39.975001] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:08:14.177 [2024-12-09 17:18:39.975048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.177 [2024-12-09 17:18:40.054594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.177 [2024-12-09 17:18:40.099074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.177 [2024-12-09 17:18:40.099109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:14.177 [2024-12-09 17:18:40.099120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.177 [2024-12-09 17:18:40.099126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.177 [2024-12-09 17:18:40.099131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.178 [2024-12-09 17:18:40.099600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.178 [2024-12-09 17:18:40.243577] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
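For readers following along, the RPC sequence that target/queue_depth.sh drives here (create the TCP transport, a 64 MB malloc bdev with 512-byte blocks, then the subsystem, namespace, and listener) can be sketched as below. The `RPC` path and the `echo` dry-run are illustration-only assumptions; the flags themselves are taken verbatim from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence exercised by target/queue_depth.sh.
# RPC path is an assumption; point it at spdk/scripts/rpc.py for a live target.
RPC="scripts/rpc.py"
cmds=(
  "$RPC nvmf_create_transport -t tcp -o -u 8192"    # TCP transport, as in queue_depth.sh@23
  "$RPC bdev_malloc_create 64 512 -b Malloc0"       # 64 MB bdev, 512 B blocks
  "$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  "$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  "$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
for c in "${cmds[@]}"; do
  echo "$c"   # swap echo for eval (or run directly) against a running nvmf_tgt
done
```

Against a live target the same commands would be issued through `ip netns exec cvl_0_0_ns_spdk`, matching the namespace set up earlier in the log.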
00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.178 Malloc0 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.178 [2024-12-09 17:18:40.293808] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.178 17:18:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1768821 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1768821 /var/tmp/bdevperf.sock 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1768821 ']' 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:14.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.178 [2024-12-09 17:18:40.345023] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:08:14.178 [2024-12-09 17:18:40.345066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768821 ] 00:08:14.178 [2024-12-09 17:18:40.418305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.178 [2024-12-09 17:18:40.459765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.178 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.437 NVMe0n1 00:08:14.437 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.437 17:18:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:14.437 Running I/O for 10 seconds... 
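bdevperf's periodic samples and summary report both IOPS and MiB/s for the same run; with the fixed 4096-byte I/O size passed via `-o 4096`, the two columns differ only by a constant conversion. A small sanity check (the helper name is an assumption for illustration):

```shell
# Relate bdevperf's IOPS and MiB/s columns for a fixed I/O size (default 4096 B).
iops_to_mibps() {
  awk -v iops="$1" -v sz="${2:-4096}" 'BEGIN { printf "%.2f\n", iops * sz / 1048576 }'
}
iops_to_mibps 11836.00   # -> 46.23
iops_to_mibps 12452.85   # -> 48.64
```

These match the paired figures bdevperf prints in the results that follow.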
00:08:16.304 11836.00 IOPS, 46.23 MiB/s [2024-12-09T16:18:44.218Z] 12028.00 IOPS, 46.98 MiB/s [2024-12-09T16:18:45.152Z] 12229.33 IOPS, 47.77 MiB/s [2024-12-09T16:18:46.085Z] 12281.50 IOPS, 47.97 MiB/s [2024-12-09T16:18:47.019Z] 12283.20 IOPS, 47.98 MiB/s [2024-12-09T16:18:47.952Z] 12285.67 IOPS, 47.99 MiB/s [2024-12-09T16:18:48.887Z] 12368.86 IOPS, 48.32 MiB/s [2024-12-09T16:18:50.259Z] 12395.00 IOPS, 48.42 MiB/s [2024-12-09T16:18:51.193Z] 12392.00 IOPS, 48.41 MiB/s [2024-12-09T16:18:51.193Z] 12427.80 IOPS, 48.55 MiB/s 00:08:24.653 Latency(us) 00:08:24.653 [2024-12-09T16:18:51.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.653 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:24.653 Verification LBA range: start 0x0 length 0x4000 00:08:24.653 NVMe0n1 : 10.06 12452.85 48.64 0.00 0.00 81929.67 18599.74 55424.73 00:08:24.653 [2024-12-09T16:18:51.193Z] =================================================================================================================== 00:08:24.653 [2024-12-09T16:18:51.193Z] Total : 12452.85 48.64 0.00 0.00 81929.67 18599.74 55424.73 00:08:24.653 { 00:08:24.653 "results": [ 00:08:24.653 { 00:08:24.653 "job": "NVMe0n1", 00:08:24.653 "core_mask": "0x1", 00:08:24.653 "workload": "verify", 00:08:24.653 "status": "finished", 00:08:24.653 "verify_range": { 00:08:24.653 "start": 0, 00:08:24.653 "length": 16384 00:08:24.653 }, 00:08:24.653 "queue_depth": 1024, 00:08:24.653 "io_size": 4096, 00:08:24.653 "runtime": 10.05826, 00:08:24.653 "iops": 12452.849697661424, 00:08:24.653 "mibps": 48.64394413148994, 00:08:24.653 "io_failed": 0, 00:08:24.653 "io_timeout": 0, 00:08:24.653 "avg_latency_us": 81929.67467583965, 00:08:24.653 "min_latency_us": 18599.74095238095, 00:08:24.653 "max_latency_us": 55424.73142857143 00:08:24.653 } 00:08:24.653 ], 00:08:24.653 "core_count": 1 00:08:24.653 } 00:08:24.653 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 1768821 00:08:24.653 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1768821 ']' 00:08:24.653 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1768821 00:08:24.653 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:24.653 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.653 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1768821 00:08:24.653 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.653 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.653 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1768821' 00:08:24.653 killing process with pid 1768821 00:08:24.653 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1768821 00:08:24.653 Received shutdown signal, test time was about 10.000000 seconds 00:08:24.653 00:08:24.653 Latency(us) 00:08:24.653 [2024-12-09T16:18:51.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.653 [2024-12-09T16:18:51.193Z] =================================================================================================================== 00:08:24.653 [2024-12-09T16:18:51.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:24.653 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1768821 00:08:24.653 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:24.653 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:24.653 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:24.653 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:24.653 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:24.654 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:24.654 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:24.654 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:24.654 rmmod nvme_tcp 00:08:24.654 rmmod nvme_fabrics 00:08:24.654 rmmod nvme_keyring 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1768709 ']' 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1768709 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1768709 ']' 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1768709 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1768709 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1768709' 00:08:24.912 killing process with pid 1768709 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1768709 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1768709 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.912 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:25.170 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.170 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.170 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.076 17:18:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:27.076 00:08:27.076 real 0m19.786s 00:08:27.076 user 0m23.192s 00:08:27.076 sys 0m6.052s 00:08:27.076 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.076 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.076 ************************************ 00:08:27.076 END TEST nvmf_queue_depth 00:08:27.076 ************************************ 00:08:27.076 17:18:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:27.076 17:18:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:27.076 17:18:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.076 17:18:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:27.076 ************************************ 00:08:27.076 START TEST nvmf_target_multipath 00:08:27.076 ************************************ 00:08:27.076 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:27.335 * Looking for test storage... 
00:08:27.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:27.335 17:18:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.335 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:27.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.336 --rc genhtml_branch_coverage=1 00:08:27.336 --rc genhtml_function_coverage=1 00:08:27.336 --rc genhtml_legend=1 00:08:27.336 --rc geninfo_all_blocks=1 00:08:27.336 --rc geninfo_unexecuted_blocks=1 00:08:27.336 00:08:27.336 ' 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:27.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.336 --rc genhtml_branch_coverage=1 00:08:27.336 --rc genhtml_function_coverage=1 00:08:27.336 --rc genhtml_legend=1 00:08:27.336 --rc geninfo_all_blocks=1 00:08:27.336 --rc geninfo_unexecuted_blocks=1 00:08:27.336 00:08:27.336 ' 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:27.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.336 --rc genhtml_branch_coverage=1 00:08:27.336 --rc genhtml_function_coverage=1 00:08:27.336 --rc genhtml_legend=1 00:08:27.336 --rc geninfo_all_blocks=1 00:08:27.336 --rc geninfo_unexecuted_blocks=1 00:08:27.336 00:08:27.336 ' 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:27.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.336 --rc genhtml_branch_coverage=1 00:08:27.336 --rc genhtml_function_coverage=1 00:08:27.336 --rc genhtml_legend=1 00:08:27.336 --rc geninfo_all_blocks=1 00:08:27.336 --rc geninfo_unexecuted_blocks=1 00:08:27.336 00:08:27.336 ' 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:27.336 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:33.907 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:33.907 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:33.907 Found net devices under 0000:af:00.0: cvl_0_0 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.907 17:18:59 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:33.907 Found net devices under 0000:af:00.1: cvl_0_1 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:33.907 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:33.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:08:33.908 00:08:33.908 --- 10.0.0.2 ping statistics --- 00:08:33.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.908 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:33.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:08:33.908 00:08:33.908 --- 10.0.0.1 ping statistics --- 00:08:33.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.908 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:33.908 only one NIC for nvmf test 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:33.908 17:18:59 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.908 rmmod nvme_tcp 00:08:33.908 rmmod nvme_fabrics 00:08:33.908 rmmod nvme_keyring 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.908 17:18:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:35.814 00:08:35.814 real 0m8.319s 00:08:35.814 user 0m1.785s 00:08:35.814 sys 0m4.552s 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:35.814 ************************************ 00:08:35.814 END TEST nvmf_target_multipath 00:08:35.814 ************************************ 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:35.814 ************************************ 00:08:35.814 START TEST nvmf_zcopy 00:08:35.814 ************************************ 00:08:35.814 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:35.814 * Looking for test storage... 00:08:35.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.814 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.815 17:19:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:35.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.815 --rc genhtml_branch_coverage=1 00:08:35.815 --rc genhtml_function_coverage=1 00:08:35.815 --rc genhtml_legend=1 00:08:35.815 --rc geninfo_all_blocks=1 00:08:35.815 --rc geninfo_unexecuted_blocks=1 00:08:35.815 00:08:35.815 ' 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:35.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.815 --rc genhtml_branch_coverage=1 00:08:35.815 --rc genhtml_function_coverage=1 00:08:35.815 --rc genhtml_legend=1 00:08:35.815 --rc geninfo_all_blocks=1 00:08:35.815 --rc geninfo_unexecuted_blocks=1 00:08:35.815 00:08:35.815 ' 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:35.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.815 --rc genhtml_branch_coverage=1 00:08:35.815 --rc genhtml_function_coverage=1 00:08:35.815 --rc genhtml_legend=1 00:08:35.815 --rc geninfo_all_blocks=1 00:08:35.815 --rc geninfo_unexecuted_blocks=1 00:08:35.815 00:08:35.815 ' 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:35.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.815 --rc genhtml_branch_coverage=1 00:08:35.815 --rc 
genhtml_function_coverage=1 00:08:35.815 --rc genhtml_legend=1 00:08:35.815 --rc geninfo_all_blocks=1 00:08:35.815 --rc geninfo_unexecuted_blocks=1 00:08:35.815 00:08:35.815 ' 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.815 17:19:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:35.815 17:19:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:35.815 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:42.434 17:19:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:42.434 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:42.434 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:42.434 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:42.435 Found net devices under 0000:af:00.0: cvl_0_0 00:08:42.435 17:19:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:42.435 Found net devices under 0000:af:00.1: cvl_0_1 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.435 17:19:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:42.435 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:42.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:08:42.435 00:08:42.435 --- 10.0.0.2 ping statistics --- 00:08:42.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.435 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:08:42.435 00:08:42.435 --- 10.0.0.1 ping statistics --- 00:08:42.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.435 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1777558 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1777558 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1777558 ']' 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.435 [2024-12-09 17:19:08.198443] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:08:42.435 [2024-12-09 17:19:08.198485] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.435 [2024-12-09 17:19:08.258718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.435 [2024-12-09 17:19:08.297852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.435 [2024-12-09 17:19:08.297889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:42.435 [2024-12-09 17:19:08.297897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.435 [2024-12-09 17:19:08.297903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.435 [2024-12-09 17:19:08.297909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.435 [2024-12-09 17:19:08.298398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.435 [2024-12-09 17:19:08.445402] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.435 [2024-12-09 17:19:08.465608] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.435 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.436 malloc0 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:42.436 { 00:08:42.436 "params": { 00:08:42.436 "name": "Nvme$subsystem", 00:08:42.436 "trtype": "$TEST_TRANSPORT", 00:08:42.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.436 "adrfam": "ipv4", 00:08:42.436 "trsvcid": "$NVMF_PORT", 00:08:42.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.436 "hdgst": ${hdgst:-false}, 00:08:42.436 "ddgst": ${ddgst:-false} 00:08:42.436 }, 00:08:42.436 "method": "bdev_nvme_attach_controller" 00:08:42.436 } 00:08:42.436 EOF 00:08:42.436 )") 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:42.436 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:42.436 "params": { 00:08:42.436 "name": "Nvme1", 00:08:42.436 "trtype": "tcp", 00:08:42.436 "traddr": "10.0.0.2", 00:08:42.436 "adrfam": "ipv4", 00:08:42.436 "trsvcid": "4420", 00:08:42.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:42.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:42.436 "hdgst": false, 00:08:42.436 "ddgst": false 00:08:42.436 }, 00:08:42.436 "method": "bdev_nvme_attach_controller" 00:08:42.436 }' 00:08:42.436 [2024-12-09 17:19:08.551025] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:08:42.436 [2024-12-09 17:19:08.551067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777579 ] 00:08:42.436 [2024-12-09 17:19:08.625979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.436 [2024-12-09 17:19:08.665630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.436 Running I/O for 10 seconds... 
00:08:44.742 8740.00 IOPS, 68.28 MiB/s [2024-12-09T16:19:12.216Z] 8786.00 IOPS, 68.64 MiB/s [2024-12-09T16:19:13.149Z] 8826.33 IOPS, 68.96 MiB/s [2024-12-09T16:19:14.083Z] 8846.75 IOPS, 69.12 MiB/s [2024-12-09T16:19:15.015Z] 8860.20 IOPS, 69.22 MiB/s [2024-12-09T16:19:16.389Z] 8865.67 IOPS, 69.26 MiB/s [2024-12-09T16:19:17.322Z] 8875.00 IOPS, 69.34 MiB/s [2024-12-09T16:19:18.301Z] 8882.62 IOPS, 69.40 MiB/s [2024-12-09T16:19:19.250Z] 8889.00 IOPS, 69.45 MiB/s [2024-12-09T16:19:19.250Z] 8891.50 IOPS, 69.46 MiB/s 00:08:52.710 Latency(us) 00:08:52.710 [2024-12-09T16:19:19.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.710 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:52.710 Verification LBA range: start 0x0 length 0x1000 00:08:52.710 Nvme1n1 : 10.01 8894.34 69.49 0.00 0.00 14350.26 2371.78 23343.30 00:08:52.710 [2024-12-09T16:19:19.250Z] =================================================================================================================== 00:08:52.710 [2024-12-09T16:19:19.250Z] Total : 8894.34 69.49 0.00 0.00 14350.26 2371.78 23343.30 00:08:52.710 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1779373 00:08:52.710 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:52.710 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.711 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:52.711 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:52.711 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:52.711 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.711 17:19:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:52.711 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:52.711 {
00:08:52.711 "params": {
00:08:52.711 "name": "Nvme$subsystem",
00:08:52.711 "trtype": "$TEST_TRANSPORT",
00:08:52.711 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:52.711 "adrfam": "ipv4",
00:08:52.711 "trsvcid": "$NVMF_PORT",
00:08:52.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:52.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:52.711 "hdgst": ${hdgst:-false},
00:08:52.711 "ddgst": ${ddgst:-false}
00:08:52.711 },
00:08:52.711 "method": "bdev_nvme_attach_controller"
00:08:52.711 }
00:08:52.711 EOF
00:08:52.711 )")
00:08:52.711 [2024-12-09 17:19:19.142674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:52.711 [2024-12-09 17:19:19.142707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:52.711 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:08:52.711 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
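The xtrace above shows `nvmf/common.sh`'s `gen_nvmf_target_json` at work: one `bdev_nvme_attach_controller` stanza per subsystem ID is built from a heredoc template, collected into a `config` array, and comma-joined with `IFS=,` before being validated by `jq`. A simplified, self-contained re-sketch of that mechanism — defaults for `TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, and `NVMF_PORT` are filled in here for illustration, and SPDK's real helper additionally wraps the result in a top-level `"subsystems"` config:

```shell
# Simplified stand-in for gen_nvmf_target_json: one JSON stanza per
# subsystem ID, comma-joined via IFS so the pieces can be spliced into
# a JSON array. Not SPDK's implementation, just the same shell idiom.
gen_target_json() {
    local subsystem
    local -a config=()
    # Defaults stand in for the harness's transport / target IP / port.
    local trtype="${TEST_TRANSPORT:-tcp}"
    local traddr="${NVMF_FIRST_TARGET_IP:-10.0.0.2}"
    local trsvcid="${NVMF_PORT:-4420}"
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$trtype",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Same join trick as the log's IFS=, + printf: "${config[*]}" glues
    # the array elements together with the first character of IFS.
    (IFS=,; printf '%s\n' "${config[*]}")
}

gen_target_json 1
```

With one subsystem the join is a no-op and the output is the single object printed in the log; with several IDs the comma-joined stanzas are ready to drop inside a `"subsystems": [...]` list.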
00:08:52.711 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:52.711 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.711 "params": { 00:08:52.711 "name": "Nvme1", 00:08:52.711 "trtype": "tcp", 00:08:52.711 "traddr": "10.0.0.2", 00:08:52.711 "adrfam": "ipv4", 00:08:52.711 "trsvcid": "4420", 00:08:52.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.711 "hdgst": false, 00:08:52.711 "ddgst": false 00:08:52.711 }, 00:08:52.711 "method": "bdev_nvme_attach_controller" 00:08:52.711 }' 00:08:52.711 [2024-12-09 17:19:19.154672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-12-09 17:19:19.154686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-12-09 17:19:19.166701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-12-09 17:19:19.166712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-12-09 17:19:19.178728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-12-09 17:19:19.178738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-12-09 17:19:19.183726] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:08:52.711 [2024-12-09 17:19:19.183768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779373 ] 00:08:52.711 [2024-12-09 17:19:19.190761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-12-09 17:19:19.190772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-12-09 17:19:19.202794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-12-09 17:19:19.202804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-12-09 17:19:19.214830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-12-09 17:19:19.214841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-12-09 17:19:19.226869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-12-09 17:19:19.226885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-12-09 17:19:19.238893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-12-09 17:19:19.238903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-12-09 17:19:19.250951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-12-09 17:19:19.250977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.257789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.969 [2024-12-09 17:19:19.262980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:52.969 [2024-12-09 17:19:19.263001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.274994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.275008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.287024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.287035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.297563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.969 [2024-12-09 17:19:19.299055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.299066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.311094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.311109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.323127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.323148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.335152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.335170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.347184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.347197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.359219] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.359232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.371243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.371255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.383275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.383284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.395323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.395343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.407351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.407366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.419383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.419398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.431422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.431436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.443444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.443455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 [2024-12-09 17:19:19.455481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.455499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.969 Running I/O for 5 seconds... 00:08:52.969 [2024-12-09 17:19:19.467508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.969 [2024-12-09 17:19:19.467525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.970 [2024-12-09 17:19:19.483057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.970 [2024-12-09 17:19:19.483079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.970 [2024-12-09 17:19:19.497177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.970 [2024-12-09 17:19:19.497198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.512115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.512137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.526710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.526732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.541029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.541049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.554967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.554991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.568366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.568387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.581979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.582000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.595488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.595507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.608997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.609017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.623469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.623488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.639809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.639829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.650949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.650968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.664983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.665005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.678710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 
[2024-12-09 17:19:19.678730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.692403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.692422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.706032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.706052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.719570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.719600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.733267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.733294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.747248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.747268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.228 [2024-12-09 17:19:19.760935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.228 [2024-12-09 17:19:19.760954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.774812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.774834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.789231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.789251] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.804732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.804753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.818500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.818520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.832281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.832301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.845605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.845627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.859386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.859405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.872917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.872936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.887204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.887223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.898458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.898477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:53.487 [2024-12-09 17:19:19.912311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.912331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.926391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.926410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.939962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.939982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.953529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.953550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.967494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.967514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.981164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.981190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:19.995137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:19.995164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:20.009401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:20.009423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.487 [2024-12-09 17:19:20.024104] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.487 [2024-12-09 17:19:20.024126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.745 [2024-12-09 17:19:20.040438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.745 [2024-12-09 17:19:20.040460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.745 [2024-12-09 17:19:20.054314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.054334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.068229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.068249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.082130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.082150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.096276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.096295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.110063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.110082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.123737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.123756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.137271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.137290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.151361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.151380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.165518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.165538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.179873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.179892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.195851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.195871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.209710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.209729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.223176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.223195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.237373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.237392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.251652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 
[2024-12-09 17:19:20.251670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.267294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.267319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.746 [2024-12-09 17:19:20.281292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.746 [2024-12-09 17:19:20.281311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.003 [2024-12-09 17:19:20.294969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.003 [2024-12-09 17:19:20.294989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.003 [2024-12-09 17:19:20.309473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.003 [2024-12-09 17:19:20.309492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.003 [2024-12-09 17:19:20.325108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.003 [2024-12-09 17:19:20.325127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.003 [2024-12-09 17:19:20.339118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.003 [2024-12-09 17:19:20.339138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.003 [2024-12-09 17:19:20.352692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.003 [2024-12-09 17:19:20.352711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.003 [2024-12-09 17:19:20.366930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.003 [2024-12-09 17:19:20.366954] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.003 [2024-12-09 17:19:20.380529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.003 [2024-12-09 17:19:20.380548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.003 [2024-12-09 17:19:20.394231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.004 [2024-12-09 17:19:20.394250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.004 [2024-12-09 17:19:20.408239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.004 [2024-12-09 17:19:20.408259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.004 [2024-12-09 17:19:20.422213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.004 [2024-12-09 17:19:20.422233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.004 [2024-12-09 17:19:20.435759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.004 [2024-12-09 17:19:20.435779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.004 [2024-12-09 17:19:20.449422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.004 [2024-12-09 17:19:20.449442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.004 [2024-12-09 17:19:20.463341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.004 [2024-12-09 17:19:20.463360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.004 16784.00 IOPS, 131.12 MiB/s [2024-12-09T16:19:20.544Z] [2024-12-09 17:19:20.477593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.004 [2024-12-09 17:19:20.477611] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.004 [2024-12-09 17:19:20.493376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.004 [2024-12-09 17:19:20.493395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.004 [2024-12-09 17:19:20.506987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.004 [2024-12-09 17:19:20.507006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.004 [2024-12-09 17:19:20.520731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.004 [2024-12-09 17:19:20.520751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.004 [2024-12-09 17:19:20.534619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.004 [2024-12-09 17:19:20.534639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.261 [2024-12-09 17:19:20.548753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.261 [2024-12-09 17:19:20.548773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.261 [2024-12-09 17:19:20.562756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.261 [2024-12-09 17:19:20.562776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.261 [2024-12-09 17:19:20.573620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.261 [2024-12-09 17:19:20.573639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.261 [2024-12-09 17:19:20.587735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.261 [2024-12-09 17:19:20.587755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:54.261 [2024-12-09 17:19:20.601094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.261 [2024-12-09 17:19:20.601114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair (spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", then nvmf_rpc_ns_paused: "Unable to add namespace") repeats roughly every 11-14 ms from 17:19:20.601 through 17:19:22.925; repeated entries elided, throughput samples retained ...]
00:08:55.037 16888.50 IOPS, 131.94 MiB/s [2024-12-09T16:19:21.577Z]
00:08:56.071 16947.33 IOPS, 132.40 MiB/s [2024-12-09T16:19:22.611Z]
00:08:56.588 [2024-12-09 17:19:22.939030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:22.939049]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.588 [2024-12-09 17:19:22.952410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:22.952429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.588 [2024-12-09 17:19:22.966041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:22.966061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.588 [2024-12-09 17:19:22.979896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:22.979915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.588 [2024-12-09 17:19:22.993337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:22.993355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.588 [2024-12-09 17:19:23.007153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:23.007178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.588 [2024-12-09 17:19:23.020955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:23.020973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.588 [2024-12-09 17:19:23.035027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:23.035046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.588 [2024-12-09 17:19:23.045718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:23.045737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:56.588 [2024-12-09 17:19:23.060141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:23.060161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.588 [2024-12-09 17:19:23.073654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:23.073675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.588 [2024-12-09 17:19:23.087014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:23.087033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.588 [2024-12-09 17:19:23.100954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:23.100978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.588 [2024-12-09 17:19:23.114745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.588 [2024-12-09 17:19:23.114765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.588 [2024-12-09 17:19:23.128824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.589 [2024-12-09 17:19:23.128844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.142905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.142926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.156590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.156610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.170343] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.170361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.183913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.183933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.198146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.198164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.209349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.209369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.223627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.223646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.237343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.237362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.251425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.251444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.265218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.265237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.279190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.279209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.293076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.293096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.306905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.306924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.320961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.320980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.334770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.334789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.348523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.348542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.361864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.361887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.847 [2024-12-09 17:19:23.375512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.847 [2024-12-09 17:19:23.375531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-12-09 17:19:23.389462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 
[2024-12-09 17:19:23.389482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-12-09 17:19:23.402854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-12-09 17:19:23.402874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.416346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.416365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.429988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.430007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.443850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.443870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.457133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.457153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.470993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.471011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 16986.00 IOPS, 132.70 MiB/s [2024-12-09T16:19:23.646Z] [2024-12-09 17:19:23.484427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.484446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.497603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 
[2024-12-09 17:19:23.497622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.511160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.511183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.525076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.525095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.538869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.538888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.553055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.553074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.566781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.566800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.580444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.580464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.593966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.593986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.607616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.607637] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.621278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.621298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-12-09 17:19:23.634797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-12-09 17:19:23.634816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.648710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.648732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.662415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.662436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.676386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.676406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.690362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.690382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.703645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.703665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.717548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.717567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:57.364 [2024-12-09 17:19:23.731644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.731664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.742381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.742401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.756415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.756435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.770074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.770093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.783677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.783698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.796995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.797015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.810590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.810610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.824619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.824639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.838516] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.838536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.852258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.852278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.865856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.865877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.879468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.879487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.364 [2024-12-09 17:19:23.893434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.364 [2024-12-09 17:19:23.893453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:23.906977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:23.906998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:23.920585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:23.920606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:23.934079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:23.934098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:23.947603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:23.947623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:23.961453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:23.961472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:23.975437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:23.975456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:23.989447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:23.989466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:24.003335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:24.003354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:24.017009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:24.017028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:24.030552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:24.030571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:24.044229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:24.044249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:24.057728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 
[2024-12-09 17:19:24.057747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:24.071761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:24.071781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:24.082256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:24.082275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:24.096142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:24.096162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:24.109802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:24.109822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:24.123459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:24.123483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:24.137100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:24.137118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.623 [2024-12-09 17:19:24.150693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.623 [2024-12-09 17:19:24.150713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.164538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.164558] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.178079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.178099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.191649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.191671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.205774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.205794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.219544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.219562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.233254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.233274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.247197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.247216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.260853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.260872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.274627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.274645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:57.882 [2024-12-09 17:19:24.288332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.288350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.302435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.302454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.316221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.316240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.330409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.330428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.340746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.340765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.354369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.354387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.368110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.368129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.381373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.381396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.395132] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.395151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.882 [2024-12-09 17:19:24.408965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.882 [2024-12-09 17:19:24.408984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.141 [2024-12-09 17:19:24.423312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.141 [2024-12-09 17:19:24.423342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.141 [2024-12-09 17:19:24.436944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.141 [2024-12-09 17:19:24.436964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.141 [2024-12-09 17:19:24.450851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.141 [2024-12-09 17:19:24.450870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.141 [2024-12-09 17:19:24.464643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.141 [2024-12-09 17:19:24.464662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.141 16999.40 IOPS, 132.81 MiB/s [2024-12-09T16:19:24.681Z] [2024-12-09 17:19:24.477846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.141 [2024-12-09 17:19:24.477865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.141 00:08:58.141 Latency(us) 00:08:58.141 [2024-12-09T16:19:24.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.141 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:58.141 Nvme1n1 : 5.01 
17004.15 132.84 0.00 0.00 7520.27 3401.63 17226.61 00:08:58.141 [2024-12-09T16:19:24.681Z] =================================================================================================================== 00:08:58.141 [2024-12-09T16:19:24.681Z] Total : 17004.15 132.84 0.00 0.00 7520.27 3401.63 17226.61 00:08:58.141 [2024-12-09 17:19:24.486940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.141 [2024-12-09 17:19:24.486957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two-message error pair above repeats verbatim with successive timestamps from 17:19:24.498 through 17:19:24.631, roughly 12 ms apart; the repeated entries are elided here ...]
00:08:58.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1779373) - No such process 00:08:58.141 17:19:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1779373 00:08:58.141 17:19:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.141 17:19:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.141 17:19:24
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.141 17:19:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.141 17:19:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:58.141 17:19:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.141 17:19:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.141 delay0 00:08:58.141 17:19:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.141 17:19:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:58.141 17:19:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.141 17:19:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.141 17:19:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.141 17:19:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:58.399 [2024-12-09 17:19:24.762704] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:04.952 Initializing NVMe Controllers 00:09:04.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:04.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:04.952 Initialization complete. Launching workers. 
00:09:04.952 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 128 00:09:04.952 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 415, failed to submit 33 00:09:04.952 success 224, unsuccessful 191, failed 0 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:04.952 rmmod nvme_tcp 00:09:04.952 rmmod nvme_fabrics 00:09:04.952 rmmod nvme_keyring 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1777558 ']' 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1777558 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1777558 ']' 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1777558 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1777558 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1777558' 00:09:04.952 killing process with pid 1777558 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1777558 00:09:04.952 17:19:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1777558 00:09:04.952 17:19:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:04.952 17:19:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:04.952 17:19:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:04.952 17:19:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:04.952 17:19:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:04.952 17:19:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:04.952 17:19:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:04.952 17:19:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:04.952 17:19:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:04.952 17:19:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:04.952 17:19:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.952 17:19:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.858 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:06.858 00:09:06.858 real 0m31.228s 00:09:06.858 user 0m41.772s 00:09:06.858 sys 0m11.001s 00:09:06.858 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.858 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.858 ************************************ 00:09:06.858 END TEST nvmf_zcopy 00:09:06.858 ************************************ 00:09:06.858 17:19:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:06.858 17:19:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:06.858 17:19:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.858 17:19:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.858 ************************************ 00:09:06.858 START TEST nvmf_nmic 00:09:06.858 ************************************ 00:09:06.858 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:06.858 * Looking for test storage... 
00:09:06.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.858 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:06.858 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:06.858 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.118 17:19:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:07.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.118 --rc genhtml_branch_coverage=1 00:09:07.118 --rc genhtml_function_coverage=1 00:09:07.118 --rc genhtml_legend=1 00:09:07.118 --rc geninfo_all_blocks=1 00:09:07.118 --rc geninfo_unexecuted_blocks=1 
00:09:07.118 00:09:07.118 ' 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:07.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.118 --rc genhtml_branch_coverage=1 00:09:07.118 --rc genhtml_function_coverage=1 00:09:07.118 --rc genhtml_legend=1 00:09:07.118 --rc geninfo_all_blocks=1 00:09:07.118 --rc geninfo_unexecuted_blocks=1 00:09:07.118 00:09:07.118 ' 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:07.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.118 --rc genhtml_branch_coverage=1 00:09:07.118 --rc genhtml_function_coverage=1 00:09:07.118 --rc genhtml_legend=1 00:09:07.118 --rc geninfo_all_blocks=1 00:09:07.118 --rc geninfo_unexecuted_blocks=1 00:09:07.118 00:09:07.118 ' 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:07.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.118 --rc genhtml_branch_coverage=1 00:09:07.118 --rc genhtml_function_coverage=1 00:09:07.118 --rc genhtml_legend=1 00:09:07.118 --rc geninfo_all_blocks=1 00:09:07.118 --rc geninfo_unexecuted_blocks=1 00:09:07.118 00:09:07.118 ' 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:07.118 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.119 17:19:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:07.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:07.119 
17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:07.119 17:19:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.687 17:19:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.687 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:13.688 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:13.688 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:13.688 Found net devices under 0000:af:00.0: cvl_0_0 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:13.688 Found net devices under 0000:af:00.1: cvl_0_1 00:09:13.688 
17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:13.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:09:13.688 00:09:13.688 --- 10.0.0.2 ping statistics --- 00:09:13.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.688 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:13.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:09:13.688 00:09:13.688 --- 10.0.0.1 ping statistics --- 00:09:13.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.688 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.688 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1784855 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1784855 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1784855 ']' 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 [2024-12-09 17:19:39.610807] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:09:13.689 [2024-12-09 17:19:39.610852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.689 [2024-12-09 17:19:39.685895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.689 [2024-12-09 17:19:39.725717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.689 [2024-12-09 17:19:39.725756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:13.689 [2024-12-09 17:19:39.725763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.689 [2024-12-09 17:19:39.725768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.689 [2024-12-09 17:19:39.725773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.689 [2024-12-09 17:19:39.727212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.689 [2024-12-09 17:19:39.727259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.689 [2024-12-09 17:19:39.727344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.689 [2024-12-09 17:19:39.727345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 [2024-12-09 17:19:39.876939] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.689 
17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 Malloc0 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 [2024-12-09 17:19:39.947495] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:13.689 test case1: single bdev can't be used in multiple subsystems 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.689 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 [2024-12-09 17:19:39.971375] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:13.689 [2024-12-09 
17:19:39.971396] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:13.689 [2024-12-09 17:19:39.971404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.689 request: 00:09:13.689 { 00:09:13.689 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:13.689 "namespace": { 00:09:13.689 "bdev_name": "Malloc0", 00:09:13.689 "no_auto_visible": false, 00:09:13.689 "hide_metadata": false 00:09:13.689 }, 00:09:13.689 "method": "nvmf_subsystem_add_ns", 00:09:13.689 "req_id": 1 00:09:13.689 } 00:09:13.689 Got JSON-RPC error response 00:09:13.689 response: 00:09:13.689 { 00:09:13.689 "code": -32602, 00:09:13.689 "message": "Invalid parameters" 00:09:13.690 } 00:09:13.690 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:13.690 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:13.690 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:13.690 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:13.690 Adding namespace failed - expected result. 
00:09:13.690 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:13.690 test case2: host connect to nvmf target in multiple paths 00:09:13.690 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:13.690 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.690 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.690 [2024-12-09 17:19:39.983520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:13.690 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.690 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.622 17:19:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:15.995 17:19:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.995 17:19:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:15.995 17:19:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.995 17:19:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:15.995 17:19:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:09:17.893 17:19:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:17.893 17:19:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:17.893 17:19:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.893 17:19:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:17.893 17:19:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.893 17:19:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:17.893 17:19:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:17.893 [global] 00:09:17.893 thread=1 00:09:17.893 invalidate=1 00:09:17.893 rw=write 00:09:17.893 time_based=1 00:09:17.893 runtime=1 00:09:17.893 ioengine=libaio 00:09:17.893 direct=1 00:09:17.893 bs=4096 00:09:17.893 iodepth=1 00:09:17.893 norandommap=0 00:09:17.893 numjobs=1 00:09:17.893 00:09:17.893 verify_dump=1 00:09:17.893 verify_backlog=512 00:09:17.893 verify_state_save=0 00:09:17.893 do_verify=1 00:09:17.893 verify=crc32c-intel 00:09:17.893 [job0] 00:09:17.894 filename=/dev/nvme0n1 00:09:17.894 Could not set queue depth (nvme0n1) 00:09:18.151 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.151 fio-3.35 00:09:18.151 Starting 1 thread 00:09:19.525 00:09:19.525 job0: (groupid=0, jobs=1): err= 0: pid=1785855: Mon Dec 9 17:19:45 2024 00:09:19.525 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:19.525 slat (nsec): min=6879, max=38782, avg=7786.04, stdev=1322.48 00:09:19.525 clat (usec): min=149, max=302, avg=219.89, stdev=37.11 00:09:19.525 lat (usec): min=163, max=310, avg=227.68, 
stdev=37.13 00:09:19.525 clat percentiles (usec): 00:09:19.525 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:09:19.525 | 30.00th=[ 184], 40.00th=[ 212], 50.00th=[ 237], 60.00th=[ 245], 00:09:19.525 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:09:19.525 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 293], 99.95th=[ 297], 00:09:19.525 | 99.99th=[ 302] 00:09:19.525 write: IOPS=2678, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:09:19.525 slat (nsec): min=9756, max=43199, avg=10936.74, stdev=1885.26 00:09:19.525 clat (usec): min=110, max=228, avg=139.02, stdev=18.82 00:09:19.525 lat (usec): min=121, max=270, avg=149.96, stdev=19.27 00:09:19.525 clat percentiles (usec): 00:09:19.525 | 1.00th=[ 115], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 122], 00:09:19.525 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 131], 60.00th=[ 149], 00:09:19.525 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 169], 00:09:19.525 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 208], 00:09:19.525 | 99.99th=[ 229] 00:09:19.525 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:19.525 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:19.525 lat (usec) : 250=86.22%, 500=13.78% 00:09:19.525 cpu : usr=4.50%, sys=7.70%, ctx=5241, majf=0, minf=1 00:09:19.525 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.525 issued rwts: total=2560,2681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.525 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.525 00:09:19.525 Run status group 0 (all jobs): 00:09:19.525 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:09:19.525 WRITE: bw=10.5MiB/s (11.0MB/s), 
10.5MiB/s-10.5MiB/s (11.0MB/s-11.0MB/s), io=10.5MiB (11.0MB), run=1001-1001msec 00:09:19.525 00:09:19.525 Disk stats (read/write): 00:09:19.525 nvme0n1: ios=2287/2560, merge=0/0, ticks=484/318, in_queue=802, util=91.38% 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:19.525 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.525 17:19:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.526 rmmod nvme_tcp 00:09:19.526 rmmod nvme_fabrics 00:09:19.526 rmmod nvme_keyring 00:09:19.526 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.526 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:19.526 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:19.526 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1784855 ']' 00:09:19.526 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1784855 00:09:19.526 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1784855 ']' 00:09:19.526 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1784855 00:09:19.526 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:19.526 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.526 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1784855 00:09:19.526 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.526 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.526 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1784855' 00:09:19.526 killing process with pid 1784855 00:09:19.526 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1784855 00:09:19.526 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1784855 00:09:19.784 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:09:19.784 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.784 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.784 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:19.784 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:19.784 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.784 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.784 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.784 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.784 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.784 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.784 17:19:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:22.321 00:09:22.321 real 0m14.976s 00:09:22.321 user 0m33.133s 00:09:22.321 sys 0m5.371s 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.321 ************************************ 00:09:22.321 END TEST nvmf_nmic 00:09:22.321 ************************************ 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:22.321 17:19:48 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.321 ************************************ 00:09:22.321 START TEST nvmf_fio_target 00:09:22.321 ************************************ 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:22.321 * Looking for test storage... 00:09:22.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.321 
17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:22.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.321 --rc genhtml_branch_coverage=1 00:09:22.321 --rc genhtml_function_coverage=1 00:09:22.321 --rc genhtml_legend=1 00:09:22.321 --rc geninfo_all_blocks=1 00:09:22.321 --rc geninfo_unexecuted_blocks=1 00:09:22.321 00:09:22.321 ' 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:22.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.321 --rc genhtml_branch_coverage=1 00:09:22.321 --rc genhtml_function_coverage=1 00:09:22.321 --rc genhtml_legend=1 00:09:22.321 --rc geninfo_all_blocks=1 00:09:22.321 --rc geninfo_unexecuted_blocks=1 00:09:22.321 00:09:22.321 ' 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:22.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.321 --rc genhtml_branch_coverage=1 00:09:22.321 --rc genhtml_function_coverage=1 00:09:22.321 --rc genhtml_legend=1 00:09:22.321 --rc geninfo_all_blocks=1 00:09:22.321 --rc geninfo_unexecuted_blocks=1 00:09:22.321 00:09:22.321 ' 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:22.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.321 --rc genhtml_branch_coverage=1 00:09:22.321 --rc 
genhtml_function_coverage=1 00:09:22.321 --rc genhtml_legend=1 00:09:22.321 --rc geninfo_all_blocks=1 00:09:22.321 --rc geninfo_unexecuted_blocks=1 00:09:22.321 00:09:22.321 ' 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:22.321 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:22.322 17:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.894 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:28.894 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:28.894 17:19:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:28.894 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:28.894 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:28.894 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:28.894 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:28.894 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:28.895 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:28.895 17:19:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:28.895 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:28.895 Found net devices under 0000:af:00.0: cvl_0_0 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:28.895 Found net devices under 0000:af:00.1: cvl_0_1 
00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.895 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:28.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:28.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:09:28.896 00:09:28.896 --- 10.0.0.2 ping statistics --- 00:09:28.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.896 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:28.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:28.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:09:28.896 00:09:28.896 --- 10.0.0.1 ping statistics --- 00:09:28.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.896 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1789605 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1789605 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1789605 ']' 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.896 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.896 [2024-12-09 17:19:54.599058] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:09:28.896 [2024-12-09 17:19:54.599100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.897 [2024-12-09 17:19:54.676249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.897 [2024-12-09 17:19:54.715258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.897 [2024-12-09 17:19:54.715294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.897 [2024-12-09 17:19:54.715301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.897 [2024-12-09 17:19:54.715307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.897 [2024-12-09 17:19:54.715311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:28.897 [2024-12-09 17:19:54.716632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.897 [2024-12-09 17:19:54.716739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.897 [2024-12-09 17:19:54.716844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.897 [2024-12-09 17:19:54.716846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.897 17:19:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.897 17:19:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:28.897 17:19:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:28.897 17:19:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:28.897 17:19:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.156 17:19:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.156 17:19:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:29.156 [2024-12-09 17:19:55.648959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.156 17:19:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:29.414 17:19:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:29.414 17:19:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:29.674 17:19:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:29.674 17:19:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:29.933 17:19:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:29.933 17:19:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.191 17:19:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:30.191 17:19:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:30.191 17:19:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.450 17:19:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:30.450 17:19:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.708 17:19:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:30.708 17:19:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.967 17:19:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:30.967 17:19:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:31.226 17:19:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:31.226 17:19:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:31.226 17:19:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.484 17:19:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:31.484 17:19:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:31.742 17:19:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.000 [2024-12-09 17:19:58.334377] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.000 17:19:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:32.258 17:19:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:32.259 17:19:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:33.636 17:19:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:33.636 17:19:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:33.636 17:19:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.636 17:19:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:33.636 17:19:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:33.636 17:19:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:35.539 17:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:35.539 17:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:35.539 17:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.539 17:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:35.539 17:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.539 17:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:35.539 17:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:35.539 [global] 00:09:35.539 thread=1 00:09:35.539 invalidate=1 00:09:35.539 rw=write 00:09:35.539 time_based=1 00:09:35.539 runtime=1 00:09:35.539 ioengine=libaio 00:09:35.539 direct=1 00:09:35.539 bs=4096 00:09:35.539 iodepth=1 00:09:35.539 norandommap=0 00:09:35.539 numjobs=1 00:09:35.539 00:09:35.539 
verify_dump=1 00:09:35.539 verify_backlog=512 00:09:35.539 verify_state_save=0 00:09:35.539 do_verify=1 00:09:35.539 verify=crc32c-intel 00:09:35.539 [job0] 00:09:35.539 filename=/dev/nvme0n1 00:09:35.539 [job1] 00:09:35.539 filename=/dev/nvme0n2 00:09:35.539 [job2] 00:09:35.539 filename=/dev/nvme0n3 00:09:35.539 [job3] 00:09:35.539 filename=/dev/nvme0n4 00:09:35.539 Could not set queue depth (nvme0n1) 00:09:35.539 Could not set queue depth (nvme0n2) 00:09:35.539 Could not set queue depth (nvme0n3) 00:09:35.539 Could not set queue depth (nvme0n4) 00:09:35.797 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.797 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.797 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.797 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.797 fio-3.35 00:09:35.797 Starting 4 threads 00:09:37.176 00:09:37.176 job0: (groupid=0, jobs=1): err= 0: pid=1791016: Mon Dec 9 17:20:03 2024 00:09:37.176 read: IOPS=2049, BW=8200KiB/s (8397kB/s)(8208KiB/1001msec) 00:09:37.176 slat (nsec): min=6274, max=24155, avg=7039.90, stdev=738.01 00:09:37.176 clat (usec): min=181, max=551, avg=257.23, stdev=54.36 00:09:37.176 lat (usec): min=188, max=558, avg=264.27, stdev=54.40 00:09:37.176 clat percentiles (usec): 00:09:37.176 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 233], 00:09:37.176 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:09:37.176 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 424], 00:09:37.176 | 99.00th=[ 494], 99.50th=[ 502], 99.90th=[ 529], 99.95th=[ 529], 00:09:37.176 | 99.99th=[ 553] 00:09:37.176 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:37.176 slat (nsec): min=9069, max=55674, avg=10110.00, stdev=1440.45 
00:09:37.176 clat (usec): min=106, max=364, avg=165.04, stdev=31.22 00:09:37.176 lat (usec): min=116, max=379, avg=175.15, stdev=31.40 00:09:37.176 clat percentiles (usec): 00:09:37.176 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:09:37.176 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 163], 00:09:37.176 | 70.00th=[ 180], 80.00th=[ 192], 90.00th=[ 206], 95.00th=[ 231], 00:09:37.176 | 99.00th=[ 258], 99.50th=[ 273], 99.90th=[ 306], 99.95th=[ 326], 00:09:37.176 | 99.99th=[ 363] 00:09:37.176 bw ( KiB/s): min=11144, max=11144, per=46.30%, avg=11144.00, stdev= 0.00, samples=1 00:09:37.176 iops : min= 2786, max= 2786, avg=2786.00, stdev= 0.00, samples=1 00:09:37.176 lat (usec) : 250=83.65%, 500=16.05%, 750=0.30% 00:09:37.176 cpu : usr=2.30%, sys=4.20%, ctx=4612, majf=0, minf=1 00:09:37.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.176 issued rwts: total=2052,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.176 job1: (groupid=0, jobs=1): err= 0: pid=1791032: Mon Dec 9 17:20:03 2024 00:09:37.176 read: IOPS=25, BW=104KiB/s (106kB/s)(104KiB/1004msec) 00:09:37.176 slat (nsec): min=8073, max=22940, avg=19784.12, stdev=5421.91 00:09:37.176 clat (usec): min=208, max=41988, avg=34609.33, stdev=15071.84 00:09:37.176 lat (usec): min=230, max=42010, avg=34629.12, stdev=15070.66 00:09:37.176 clat percentiles (usec): 00:09:37.176 | 1.00th=[ 208], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[40633], 00:09:37.176 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:37.176 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:37.176 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:37.176 | 99.99th=[42206] 
00:09:37.176 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:37.176 slat (nsec): min=9162, max=39711, avg=10262.20, stdev=1670.23 00:09:37.176 clat (usec): min=144, max=362, avg=189.97, stdev=20.45 00:09:37.176 lat (usec): min=154, max=402, avg=200.23, stdev=20.95 00:09:37.176 clat percentiles (usec): 00:09:37.176 | 1.00th=[ 151], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:09:37.176 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:09:37.176 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 217], 00:09:37.176 | 99.00th=[ 265], 99.50th=[ 297], 99.90th=[ 363], 99.95th=[ 363], 00:09:37.176 | 99.99th=[ 363] 00:09:37.176 bw ( KiB/s): min= 4096, max= 4096, per=17.02%, avg=4096.00, stdev= 0.00, samples=1 00:09:37.176 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:37.176 lat (usec) : 250=93.12%, 500=2.79% 00:09:37.176 lat (msec) : 50=4.09% 00:09:37.176 cpu : usr=0.30%, sys=0.50%, ctx=538, majf=0, minf=2 00:09:37.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.176 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.176 job2: (groupid=0, jobs=1): err= 0: pid=1791049: Mon Dec 9 17:20:03 2024 00:09:37.176 read: IOPS=2079, BW=8317KiB/s (8517kB/s)(8492KiB/1021msec) 00:09:37.176 slat (nsec): min=6623, max=27992, avg=7420.68, stdev=794.55 00:09:37.176 clat (usec): min=190, max=41251, avg=265.22, stdev=890.39 00:09:37.176 lat (usec): min=197, max=41261, avg=272.64, stdev=890.45 00:09:37.176 clat percentiles (usec): 00:09:37.176 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:09:37.176 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:09:37.176 | 70.00th=[ 249], 
80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 281], 00:09:37.176 | 99.00th=[ 429], 99.50th=[ 461], 99.90th=[ 515], 99.95th=[ 529], 00:09:37.176 | 99.99th=[41157] 00:09:37.176 write: IOPS=2507, BW=9.79MiB/s (10.3MB/s)(10.0MiB/1021msec); 0 zone resets 00:09:37.176 slat (nsec): min=9439, max=45360, avg=11025.69, stdev=2222.53 00:09:37.176 clat (usec): min=117, max=483, avg=157.23, stdev=26.22 00:09:37.176 lat (usec): min=127, max=518, avg=168.26, stdev=27.19 00:09:37.176 clat percentiles (usec): 00:09:37.176 | 1.00th=[ 122], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 139], 00:09:37.176 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 153], 00:09:37.176 | 70.00th=[ 165], 80.00th=[ 180], 90.00th=[ 192], 95.00th=[ 202], 00:09:37.176 | 99.00th=[ 239], 99.50th=[ 277], 99.90th=[ 343], 99.95th=[ 347], 00:09:37.176 | 99.99th=[ 486] 00:09:37.176 bw ( KiB/s): min= 9664, max=10816, per=42.54%, avg=10240.00, stdev=814.59, samples=2 00:09:37.176 iops : min= 2416, max= 2704, avg=2560.00, stdev=203.65, samples=2 00:09:37.176 lat (usec) : 250=87.08%, 500=12.86%, 750=0.04% 00:09:37.176 lat (msec) : 50=0.02% 00:09:37.176 cpu : usr=2.45%, sys=4.41%, ctx=4683, majf=0, minf=2 00:09:37.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.176 issued rwts: total=2123,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.176 job3: (groupid=0, jobs=1): err= 0: pid=1791051: Mon Dec 9 17:20:03 2024 00:09:37.176 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:09:37.176 slat (nsec): min=9760, max=24299, avg=23076.95, stdev=3034.70 00:09:37.176 clat (usec): min=40909, max=41993, avg=41242.03, stdev=450.88 00:09:37.176 lat (usec): min=40934, max=42017, avg=41265.10, stdev=451.17 00:09:37.176 clat percentiles (usec): 
00:09:37.176 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:37.176 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:37.176 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:37.176 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:37.176 | 99.99th=[42206] 00:09:37.176 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:09:37.176 slat (nsec): min=9691, max=45491, avg=11098.07, stdev=2327.29 00:09:37.176 clat (usec): min=144, max=380, avg=183.94, stdev=17.67 00:09:37.176 lat (usec): min=154, max=426, avg=195.04, stdev=18.65 00:09:37.176 clat percentiles (usec): 00:09:37.176 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 174], 00:09:37.176 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:09:37.176 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 208], 00:09:37.176 | 99.00th=[ 227], 99.50th=[ 255], 99.90th=[ 379], 99.95th=[ 379], 00:09:37.176 | 99.99th=[ 379] 00:09:37.176 bw ( KiB/s): min= 4096, max= 4096, per=17.02%, avg=4096.00, stdev= 0.00, samples=1 00:09:37.176 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:37.176 lat (usec) : 250=95.32%, 500=0.56% 00:09:37.176 lat (msec) : 50=4.12% 00:09:37.176 cpu : usr=0.30%, sys=0.50%, ctx=536, majf=0, minf=1 00:09:37.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.176 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.176 00:09:37.177 Run status group 0 (all jobs): 00:09:37.177 READ: bw=16.2MiB/s (16.9MB/s), 87.1KiB/s-8317KiB/s (89.2kB/s-8517kB/s), io=16.5MiB (17.3MB), run=1001-1021msec 00:09:37.177 WRITE: bw=23.5MiB/s (24.6MB/s), 
2028KiB/s-9.99MiB/s (2076kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1021msec 00:09:37.177 00:09:37.177 Disk stats (read/write): 00:09:37.177 nvme0n1: ios=1896/2048, merge=0/0, ticks=480/338, in_queue=818, util=86.67% 00:09:37.177 nvme0n2: ios=41/512, merge=0/0, ticks=928/93, in_queue=1021, util=91.06% 00:09:37.177 nvme0n3: ios=1953/2048, merge=0/0, ticks=471/317, in_queue=788, util=88.95% 00:09:37.177 nvme0n4: ios=42/512, merge=0/0, ticks=1730/95, in_queue=1825, util=98.42% 00:09:37.177 17:20:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:37.177 [global] 00:09:37.177 thread=1 00:09:37.177 invalidate=1 00:09:37.177 rw=randwrite 00:09:37.177 time_based=1 00:09:37.177 runtime=1 00:09:37.177 ioengine=libaio 00:09:37.177 direct=1 00:09:37.177 bs=4096 00:09:37.177 iodepth=1 00:09:37.177 norandommap=0 00:09:37.177 numjobs=1 00:09:37.177 00:09:37.177 verify_dump=1 00:09:37.177 verify_backlog=512 00:09:37.177 verify_state_save=0 00:09:37.177 do_verify=1 00:09:37.177 verify=crc32c-intel 00:09:37.177 [job0] 00:09:37.177 filename=/dev/nvme0n1 00:09:37.177 [job1] 00:09:37.177 filename=/dev/nvme0n2 00:09:37.177 [job2] 00:09:37.177 filename=/dev/nvme0n3 00:09:37.177 [job3] 00:09:37.177 filename=/dev/nvme0n4 00:09:37.177 Could not set queue depth (nvme0n1) 00:09:37.177 Could not set queue depth (nvme0n2) 00:09:37.177 Could not set queue depth (nvme0n3) 00:09:37.177 Could not set queue depth (nvme0n4) 00:09:37.435 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:37.435 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:37.435 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:37.435 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:09:37.435 fio-3.35 00:09:37.435 Starting 4 threads 00:09:38.813 00:09:38.813 job0: (groupid=0, jobs=1): err= 0: pid=1791497: Mon Dec 9 17:20:05 2024 00:09:38.813 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:09:38.813 slat (nsec): min=10729, max=21709, avg=20580.73, stdev=2227.89 00:09:38.813 clat (usec): min=40871, max=41321, avg=40984.66, stdev=81.20 00:09:38.813 lat (usec): min=40893, max=41332, avg=41005.24, stdev=79.15 00:09:38.813 clat percentiles (usec): 00:09:38.813 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:38.813 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:38.813 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:38.813 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:38.813 | 99.99th=[41157] 00:09:38.813 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:38.813 slat (nsec): min=11056, max=37639, avg=12336.69, stdev=1989.83 00:09:38.813 clat (usec): min=135, max=888, avg=176.58, stdev=59.28 00:09:38.813 lat (usec): min=146, max=905, avg=188.92, stdev=59.52 00:09:38.813 clat percentiles (usec): 00:09:38.813 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:09:38.813 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:09:38.813 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 204], 00:09:38.813 | 99.00th=[ 519], 99.50th=[ 635], 99.90th=[ 889], 99.95th=[ 889], 00:09:38.813 | 99.99th=[ 889] 00:09:38.813 bw ( KiB/s): min= 4096, max= 4096, per=23.75%, avg=4096.00, stdev= 0.00, samples=1 00:09:38.813 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:38.813 lat (usec) : 250=94.01%, 500=0.56%, 750=0.94%, 1000=0.37% 00:09:38.813 lat (msec) : 50=4.12% 00:09:38.813 cpu : usr=0.90%, sys=0.50%, ctx=534, majf=0, minf=1 00:09:38.813 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:38.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.813 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.813 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.813 job1: (groupid=0, jobs=1): err= 0: pid=1791512: Mon Dec 9 17:20:05 2024 00:09:38.813 read: IOPS=156, BW=627KiB/s (642kB/s)(628KiB/1001msec) 00:09:38.813 slat (nsec): min=6560, max=23140, avg=9228.06, stdev=5177.98 00:09:38.813 clat (usec): min=187, max=41984, avg=5699.02, stdev=13973.45 00:09:38.813 lat (usec): min=194, max=42006, avg=5708.25, stdev=13978.29 00:09:38.813 clat percentiles (usec): 00:09:38.813 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 215], 00:09:38.813 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 233], 00:09:38.813 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[41157], 95.00th=[41157], 00:09:38.813 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:38.813 | 99.99th=[42206] 00:09:38.813 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:38.813 slat (nsec): min=8825, max=64983, avg=10152.24, stdev=2827.89 00:09:38.813 clat (usec): min=131, max=373, avg=190.40, stdev=25.39 00:09:38.813 lat (usec): min=141, max=382, avg=200.55, stdev=25.77 00:09:38.813 clat percentiles (usec): 00:09:38.813 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 159], 20.00th=[ 172], 00:09:38.813 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 198], 00:09:38.813 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 231], 00:09:38.813 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 375], 99.95th=[ 375], 00:09:38.813 | 99.99th=[ 375] 00:09:38.813 bw ( KiB/s): min= 4096, max= 4096, per=23.75%, avg=4096.00, stdev= 0.00, samples=1 00:09:38.813 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:38.813 lat (usec) : 250=94.47%, 500=2.39% 00:09:38.813 lat 
(msec) : 50=3.14% 00:09:38.813 cpu : usr=0.30%, sys=0.70%, ctx=669, majf=0, minf=1 00:09:38.813 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.813 issued rwts: total=157,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.813 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.813 job2: (groupid=0, jobs=1): err= 0: pid=1791513: Mon Dec 9 17:20:05 2024 00:09:38.813 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:09:38.813 slat (nsec): min=10348, max=24695, avg=23350.27, stdev=2919.47 00:09:38.813 clat (usec): min=40744, max=41985, avg=41048.37, stdev=306.43 00:09:38.813 lat (usec): min=40755, max=42010, avg=41071.72, stdev=307.07 00:09:38.813 clat percentiles (usec): 00:09:38.813 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:38.813 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:38.813 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:09:38.813 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:38.813 | 99.99th=[42206] 00:09:38.813 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:09:38.813 slat (nsec): min=9540, max=42980, avg=10607.98, stdev=2057.80 00:09:38.813 clat (usec): min=127, max=1300, avg=199.07, stdev=61.90 00:09:38.813 lat (usec): min=137, max=1311, avg=209.68, stdev=62.20 00:09:38.813 clat percentiles (usec): 00:09:38.813 | 1.00th=[ 137], 5.00th=[ 151], 10.00th=[ 161], 20.00th=[ 174], 00:09:38.813 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:09:38.813 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 241], 95.00th=[ 247], 00:09:38.813 | 99.00th=[ 347], 99.50th=[ 383], 99.90th=[ 1303], 99.95th=[ 1303], 00:09:38.813 | 99.99th=[ 1303] 00:09:38.813 bw ( KiB/s): min= 4096, max= 4096, 
per=23.75%, avg=4096.00, stdev= 0.00, samples=1 00:09:38.813 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:38.813 lat (usec) : 250=91.39%, 500=4.12%, 750=0.19% 00:09:38.813 lat (msec) : 2=0.19%, 50=4.12% 00:09:38.813 cpu : usr=0.39%, sys=0.49%, ctx=536, majf=0, minf=1 00:09:38.813 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.813 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.813 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.813 job3: (groupid=0, jobs=1): err= 0: pid=1791514: Mon Dec 9 17:20:05 2024 00:09:38.813 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:38.813 slat (nsec): min=6707, max=42339, avg=8013.10, stdev=1774.68 00:09:38.813 clat (usec): min=160, max=512, avg=203.42, stdev=14.78 00:09:38.813 lat (usec): min=167, max=520, avg=211.44, stdev=14.95 00:09:38.813 clat percentiles (usec): 00:09:38.813 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:09:38.813 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 206], 00:09:38.813 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 225], 00:09:38.813 | 99.00th=[ 237], 99.50th=[ 243], 99.90th=[ 265], 99.95th=[ 359], 00:09:38.813 | 99.99th=[ 515] 00:09:38.813 write: IOPS=2833, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec); 0 zone resets 00:09:38.813 slat (nsec): min=9524, max=70818, avg=10785.73, stdev=1895.39 00:09:38.813 clat (usec): min=119, max=341, avg=145.34, stdev=17.09 00:09:38.813 lat (usec): min=129, max=379, avg=156.13, stdev=17.59 00:09:38.813 clat percentiles (usec): 00:09:38.813 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 130], 20.00th=[ 133], 00:09:38.813 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:09:38.813 | 70.00th=[ 151], 80.00th=[ 159], 90.00th=[ 169], 
95.00th=[ 180], 00:09:38.813 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 241], 99.95th=[ 285], 00:09:38.813 | 99.99th=[ 343] 00:09:38.813 bw ( KiB/s): min=12288, max=12288, per=71.25%, avg=12288.00, stdev= 0.00, samples=1 00:09:38.813 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:38.813 lat (usec) : 250=99.85%, 500=0.13%, 750=0.02% 00:09:38.813 cpu : usr=3.90%, sys=8.80%, ctx=5396, majf=0, minf=1 00:09:38.813 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.813 issued rwts: total=2560,2836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.813 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.813 00:09:38.813 Run status group 0 (all jobs): 00:09:38.813 READ: bw=10.6MiB/s (11.2MB/s), 86.8KiB/s-9.99MiB/s (88.9kB/s-10.5MB/s), io=10.8MiB (11.3MB), run=1001-1014msec 00:09:38.813 WRITE: bw=16.8MiB/s (17.7MB/s), 2020KiB/s-11.1MiB/s (2068kB/s-11.6MB/s), io=17.1MiB (17.9MB), run=1001-1014msec 00:09:38.813 00:09:38.813 Disk stats (read/write): 00:09:38.813 nvme0n1: ios=54/512, merge=0/0, ticks=799/88, in_queue=887, util=89.68% 00:09:38.813 nvme0n2: ios=19/512, merge=0/0, ticks=742/92, in_queue=834, util=86.62% 00:09:38.813 nvme0n3: ios=59/512, merge=0/0, ticks=1053/102, in_queue=1155, util=99.16% 00:09:38.813 nvme0n4: ios=2051/2560, merge=0/0, ticks=385/353, in_queue=738, util=89.65% 00:09:38.813 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:38.813 [global] 00:09:38.813 thread=1 00:09:38.813 invalidate=1 00:09:38.813 rw=write 00:09:38.813 time_based=1 00:09:38.813 runtime=1 00:09:38.813 ioengine=libaio 00:09:38.813 direct=1 00:09:38.813 bs=4096 00:09:38.813 iodepth=128 00:09:38.813 norandommap=0 
00:09:38.814 numjobs=1 00:09:38.814 00:09:38.814 verify_dump=1 00:09:38.814 verify_backlog=512 00:09:38.814 verify_state_save=0 00:09:38.814 do_verify=1 00:09:38.814 verify=crc32c-intel 00:09:38.814 [job0] 00:09:38.814 filename=/dev/nvme0n1 00:09:38.814 [job1] 00:09:38.814 filename=/dev/nvme0n2 00:09:38.814 [job2] 00:09:38.814 filename=/dev/nvme0n3 00:09:38.814 [job3] 00:09:38.814 filename=/dev/nvme0n4 00:09:38.814 Could not set queue depth (nvme0n1) 00:09:38.814 Could not set queue depth (nvme0n2) 00:09:38.814 Could not set queue depth (nvme0n3) 00:09:38.814 Could not set queue depth (nvme0n4) 00:09:39.073 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:39.073 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:39.073 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:39.073 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:39.073 fio-3.35 00:09:39.073 Starting 4 threads 00:09:40.477 00:09:40.477 job0: (groupid=0, jobs=1): err= 0: pid=1791886: Mon Dec 9 17:20:06 2024 00:09:40.477 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:09:40.477 slat (nsec): min=1221, max=16585k, avg=133152.81, stdev=909960.72 00:09:40.477 clat (usec): min=6158, max=44138, avg=15994.17, stdev=6089.68 00:09:40.477 lat (usec): min=6167, max=44149, avg=16127.32, stdev=6179.14 00:09:40.477 clat percentiles (usec): 00:09:40.477 | 1.00th=[ 7635], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[11207], 00:09:40.477 | 30.00th=[12256], 40.00th=[13042], 50.00th=[14484], 60.00th=[16057], 00:09:40.477 | 70.00th=[18220], 80.00th=[19792], 90.00th=[22938], 95.00th=[30278], 00:09:40.477 | 99.00th=[35914], 99.50th=[38011], 99.90th=[44303], 99.95th=[44303], 00:09:40.477 | 99.99th=[44303] 00:09:40.477 write: IOPS=3486, BW=13.6MiB/s 
(14.3MB/s)(13.7MiB/1009msec); 0 zone resets 00:09:40.477 slat (usec): min=2, max=15198, avg=161.46, stdev=811.83 00:09:40.477 clat (usec): min=2226, max=53465, avg=22422.68, stdev=12028.67 00:09:40.477 lat (usec): min=2236, max=53473, avg=22584.15, stdev=12114.68 00:09:40.477 clat percentiles (usec): 00:09:40.477 | 1.00th=[ 6456], 5.00th=[ 7767], 10.00th=[ 8225], 20.00th=[10290], 00:09:40.477 | 30.00th=[12780], 40.00th=[15664], 50.00th=[20317], 60.00th=[25297], 00:09:40.477 | 70.00th=[29230], 80.00th=[33817], 90.00th=[40633], 95.00th=[44303], 00:09:40.477 | 99.00th=[47973], 99.50th=[48497], 99.90th=[53216], 99.95th=[53216], 00:09:40.477 | 99.99th=[53216] 00:09:40.477 bw ( KiB/s): min=10744, max=16351, per=23.12%, avg=13547.50, stdev=3964.75, samples=2 00:09:40.477 iops : min= 2686, max= 4087, avg=3386.50, stdev=990.66, samples=2 00:09:40.477 lat (msec) : 4=0.18%, 10=12.84%, 20=50.79%, 50=35.98%, 100=0.21% 00:09:40.477 cpu : usr=3.67%, sys=4.07%, ctx=309, majf=0, minf=1 00:09:40.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:40.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:40.477 issued rwts: total=3072,3518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:40.477 job1: (groupid=0, jobs=1): err= 0: pid=1791887: Mon Dec 9 17:20:06 2024 00:09:40.477 read: IOPS=2040, BW=8163KiB/s (8359kB/s)(8220KiB/1007msec) 00:09:40.477 slat (usec): min=2, max=22494, avg=149.50, stdev=1166.46 00:09:40.477 clat (usec): min=6107, max=85068, avg=16543.15, stdev=11700.33 00:09:40.477 lat (usec): min=6112, max=85076, avg=16692.65, stdev=11851.81 00:09:40.477 clat percentiles (usec): 00:09:40.477 | 1.00th=[ 6128], 5.00th=[ 7111], 10.00th=[ 7767], 20.00th=[10552], 00:09:40.477 | 30.00th=[11994], 40.00th=[12256], 50.00th=[13960], 60.00th=[14222], 00:09:40.477 | 
70.00th=[14746], 80.00th=[16909], 90.00th=[28705], 95.00th=[40109], 00:09:40.477 | 99.00th=[72877], 99.50th=[82314], 99.90th=[85459], 99.95th=[85459], 00:09:40.477 | 99.99th=[85459] 00:09:40.477 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:09:40.477 slat (nsec): min=2000, max=44539k, avg=264326.26, stdev=1597997.36 00:09:40.477 clat (msec): min=3, max=129, avg=36.09, stdev=24.43 00:09:40.477 lat (msec): min=3, max=129, avg=36.35, stdev=24.59 00:09:40.477 clat percentiles (msec): 00:09:40.477 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 14], 00:09:40.477 | 30.00th=[ 25], 40.00th=[ 26], 50.00th=[ 32], 60.00th=[ 39], 00:09:40.477 | 70.00th=[ 41], 80.00th=[ 47], 90.00th=[ 71], 95.00th=[ 99], 00:09:40.477 | 99.00th=[ 113], 99.50th=[ 118], 99.90th=[ 130], 99.95th=[ 130], 00:09:40.477 | 99.99th=[ 130] 00:09:40.477 bw ( KiB/s): min= 8175, max=11328, per=16.64%, avg=9751.50, stdev=2229.51, samples=2 00:09:40.477 iops : min= 2043, max= 2832, avg=2437.50, stdev=557.91, samples=2 00:09:40.477 lat (msec) : 4=0.13%, 10=12.31%, 20=38.09%, 50=38.18%, 100=8.73% 00:09:40.477 lat (msec) : 250=2.56% 00:09:40.477 cpu : usr=1.99%, sys=2.88%, ctx=246, majf=0, minf=1 00:09:40.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:09:40.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:40.477 issued rwts: total=2055,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:40.477 job2: (groupid=0, jobs=1): err= 0: pid=1791888: Mon Dec 9 17:20:06 2024 00:09:40.477 read: IOPS=4331, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1004msec) 00:09:40.477 slat (nsec): min=1091, max=19805k, avg=118059.30, stdev=903489.56 00:09:40.477 clat (usec): min=2947, max=52402, avg=14133.34, stdev=7267.43 00:09:40.477 lat (usec): min=3049, max=52414, avg=14251.40, stdev=7348.20 
00:09:40.477 clat percentiles (usec): 00:09:40.477 | 1.00th=[ 4424], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 8848], 00:09:40.477 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10945], 60.00th=[13173], 00:09:40.477 | 70.00th=[15926], 80.00th=[19792], 90.00th=[21890], 95.00th=[28181], 00:09:40.477 | 99.00th=[41157], 99.50th=[46400], 99.90th=[52167], 99.95th=[52167], 00:09:40.477 | 99.99th=[52167] 00:09:40.477 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:09:40.477 slat (usec): min=2, max=17926, avg=95.68, stdev=598.97 00:09:40.477 clat (usec): min=345, max=52411, avg=14260.19, stdev=11127.31 00:09:40.477 lat (usec): min=355, max=53608, avg=14355.87, stdev=11206.70 00:09:40.477 clat percentiles (usec): 00:09:40.477 | 1.00th=[ 1844], 5.00th=[ 4146], 10.00th=[ 6063], 20.00th=[ 8225], 00:09:40.477 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[10028], 00:09:40.477 | 70.00th=[11994], 80.00th=[19530], 90.00th=[34866], 95.00th=[41681], 00:09:40.477 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50594], 99.95th=[50594], 00:09:40.477 | 99.99th=[52167] 00:09:40.477 bw ( KiB/s): min=16384, max=20439, per=31.42%, avg=18411.50, stdev=2867.32, samples=2 00:09:40.477 iops : min= 4096, max= 5109, avg=4602.50, stdev=716.30, samples=2 00:09:40.477 lat (usec) : 500=0.03%, 750=0.12%, 1000=0.11% 00:09:40.477 lat (msec) : 2=0.31%, 4=2.02%, 10=47.05%, 20=32.73%, 50=17.37% 00:09:40.477 lat (msec) : 100=0.25% 00:09:40.477 cpu : usr=2.59%, sys=5.28%, ctx=492, majf=0, minf=2 00:09:40.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:40.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:40.477 issued rwts: total=4349,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:40.477 job3: (groupid=0, jobs=1): err= 0: pid=1791889: Mon Dec 9 
17:20:06 2024 00:09:40.477 read: IOPS=3959, BW=15.5MiB/s (16.2MB/s)(16.1MiB/1044msec) 00:09:40.477 slat (nsec): min=1129, max=30286k, avg=125530.13, stdev=1066035.18 00:09:40.477 clat (usec): min=3077, max=70616, avg=16643.66, stdev=12374.84 00:09:40.477 lat (usec): min=3085, max=70640, avg=16769.19, stdev=12496.33 00:09:40.477 clat percentiles (usec): 00:09:40.477 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 6325], 20.00th=[ 8160], 00:09:40.477 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10552], 60.00th=[12911], 00:09:40.477 | 70.00th=[19530], 80.00th=[26346], 90.00th=[33817], 95.00th=[44827], 00:09:40.478 | 99.00th=[51119], 99.50th=[61604], 99.90th=[69731], 99.95th=[69731], 00:09:40.478 | 99.99th=[70779] 00:09:40.478 write: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:09:40.478 slat (usec): min=2, max=19454, avg=82.26, stdev=621.81 00:09:40.478 clat (usec): min=356, max=55636, avg=13788.52, stdev=10176.77 00:09:40.478 lat (usec): min=896, max=55642, avg=13870.78, stdev=10240.25 00:09:40.478 clat percentiles (usec): 00:09:40.478 | 1.00th=[ 1696], 5.00th=[ 4228], 10.00th=[ 5407], 20.00th=[ 6652], 00:09:40.478 | 30.00th=[ 7832], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[10290], 00:09:40.478 | 70.00th=[15926], 80.00th=[23462], 90.00th=[27919], 95.00th=[34866], 00:09:40.478 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50070], 99.95th=[51643], 00:09:40.478 | 99.99th=[55837] 00:09:40.478 bw ( KiB/s): min=15672, max=20439, per=30.81%, avg=18055.50, stdev=3370.78, samples=2 00:09:40.478 iops : min= 3918, max= 5109, avg=4513.50, stdev=842.16, samples=2 00:09:40.478 lat (usec) : 500=0.01% 00:09:40.478 lat (msec) : 2=0.62%, 4=1.77%, 10=48.16%, 20=24.07%, 50=24.07% 00:09:40.478 lat (msec) : 100=1.30% 00:09:40.478 cpu : usr=2.49%, sys=5.08%, ctx=360, majf=0, minf=1 00:09:40.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:40.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.478 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:40.478 issued rwts: total=4134,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:40.478 00:09:40.478 Run status group 0 (all jobs): 00:09:40.478 READ: bw=50.9MiB/s (53.4MB/s), 8163KiB/s-16.9MiB/s (8359kB/s-17.7MB/s), io=53.2MiB (55.7MB), run=1004-1044msec 00:09:40.478 WRITE: bw=57.2MiB/s (60.0MB/s), 9.93MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=59.7MiB (62.6MB), run=1004-1044msec 00:09:40.478 00:09:40.478 Disk stats (read/write): 00:09:40.478 nvme0n1: ios=2610/3063, merge=0/0, ticks=40806/62759, in_queue=103565, util=86.77% 00:09:40.478 nvme0n2: ios=2085/2111, merge=0/0, ticks=19472/35261, in_queue=54733, util=98.37% 00:09:40.478 nvme0n3: ios=3601/4063, merge=0/0, ticks=45738/55688, in_queue=101426, util=98.13% 00:09:40.478 nvme0n4: ios=3626/4071, merge=0/0, ticks=38244/36348, in_queue=74592, util=98.22% 00:09:40.478 17:20:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:40.478 [global] 00:09:40.478 thread=1 00:09:40.478 invalidate=1 00:09:40.478 rw=randwrite 00:09:40.478 time_based=1 00:09:40.478 runtime=1 00:09:40.478 ioengine=libaio 00:09:40.478 direct=1 00:09:40.478 bs=4096 00:09:40.478 iodepth=128 00:09:40.478 norandommap=0 00:09:40.478 numjobs=1 00:09:40.478 00:09:40.478 verify_dump=1 00:09:40.478 verify_backlog=512 00:09:40.478 verify_state_save=0 00:09:40.478 do_verify=1 00:09:40.478 verify=crc32c-intel 00:09:40.478 [job0] 00:09:40.478 filename=/dev/nvme0n1 00:09:40.478 [job1] 00:09:40.478 filename=/dev/nvme0n2 00:09:40.478 [job2] 00:09:40.478 filename=/dev/nvme0n3 00:09:40.478 [job3] 00:09:40.478 filename=/dev/nvme0n4 00:09:40.478 Could not set queue depth (nvme0n1) 00:09:40.478 Could not set queue depth (nvme0n2) 00:09:40.478 Could not set queue depth (nvme0n3) 00:09:40.478 
Could not set queue depth (nvme0n4) 00:09:40.738 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:40.738 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:40.738 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:40.738 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:40.738 fio-3.35 00:09:40.738 Starting 4 threads 00:09:42.108 00:09:42.108 job0: (groupid=0, jobs=1): err= 0: pid=1792252: Mon Dec 9 17:20:08 2024 00:09:42.108 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:09:42.108 slat (nsec): min=1056, max=14498k, avg=112433.88, stdev=784827.77 00:09:42.108 clat (usec): min=2576, max=46344, avg=14753.14, stdev=7326.49 00:09:42.108 lat (usec): min=2578, max=46350, avg=14865.57, stdev=7385.81 00:09:42.108 clat percentiles (usec): 00:09:42.108 | 1.00th=[ 2966], 5.00th=[ 7635], 10.00th=[ 9503], 20.00th=[10421], 00:09:42.108 | 30.00th=[10945], 40.00th=[11207], 50.00th=[12256], 60.00th=[14091], 00:09:42.108 | 70.00th=[15008], 80.00th=[18482], 90.00th=[22676], 95.00th=[31589], 00:09:42.108 | 99.00th=[42730], 99.50th=[43779], 99.90th=[46400], 99.95th=[46400], 00:09:42.108 | 99.99th=[46400] 00:09:42.108 write: IOPS=4479, BW=17.5MiB/s (18.3MB/s)(17.6MiB/1007msec); 0 zone resets 00:09:42.108 slat (nsec): min=1788, max=11639k, avg=100037.81, stdev=665706.39 00:09:42.108 clat (usec): min=343, max=62022, avg=14859.90, stdev=11003.82 00:09:42.108 lat (usec): min=354, max=62026, avg=14959.94, stdev=11056.31 00:09:42.108 clat percentiles (usec): 00:09:42.108 | 1.00th=[ 996], 5.00th=[ 3359], 10.00th=[ 5932], 20.00th=[ 8225], 00:09:42.108 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[11863], 60.00th=[13042], 00:09:42.108 | 70.00th=[15139], 80.00th=[19268], 90.00th=[31589], 95.00th=[41157], 00:09:42.108 | 
99.00th=[53740], 99.50th=[55837], 99.90th=[62129], 99.95th=[62129], 00:09:42.108 | 99.99th=[62129] 00:09:42.108 bw ( KiB/s): min=16384, max=18688, per=23.97%, avg=17536.00, stdev=1629.17, samples=2 00:09:42.108 iops : min= 4096, max= 4672, avg=4384.00, stdev=407.29, samples=2 00:09:42.108 lat (usec) : 500=0.07%, 750=0.07%, 1000=0.46% 00:09:42.108 lat (msec) : 2=0.74%, 4=3.06%, 10=22.88%, 20=53.79%, 50=17.72% 00:09:42.108 lat (msec) : 100=1.21% 00:09:42.108 cpu : usr=2.68%, sys=4.37%, ctx=320, majf=0, minf=1 00:09:42.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:42.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.108 issued rwts: total=4096,4511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.108 job1: (groupid=0, jobs=1): err= 0: pid=1792253: Mon Dec 9 17:20:08 2024 00:09:42.108 read: IOPS=5322, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1003msec) 00:09:42.108 slat (nsec): min=1351, max=16704k, avg=88826.66, stdev=566831.58 00:09:42.108 clat (usec): min=621, max=42859, avg=11659.85, stdev=4620.07 00:09:42.108 lat (usec): min=3282, max=42891, avg=11748.67, stdev=4644.50 00:09:42.108 clat percentiles (usec): 00:09:42.108 | 1.00th=[ 6259], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[ 9634], 00:09:42.108 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10683], 60.00th=[11076], 00:09:42.108 | 70.00th=[11731], 80.00th=[12256], 90.00th=[13829], 95.00th=[18744], 00:09:42.108 | 99.00th=[39060], 99.50th=[39584], 99.90th=[42730], 99.95th=[42730], 00:09:42.108 | 99.99th=[42730] 00:09:42.108 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:09:42.108 slat (usec): min=2, max=17844, avg=85.38, stdev=574.44 00:09:42.108 clat (usec): min=2877, max=47705, avg=11491.06, stdev=4338.38 00:09:42.108 lat (usec): min=2889, max=47736, avg=11576.43, 
stdev=4392.91 00:09:42.108 clat percentiles (usec): 00:09:42.108 | 1.00th=[ 6128], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10028], 00:09:42.108 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:09:42.108 | 70.00th=[11207], 80.00th=[11731], 90.00th=[12649], 95.00th=[18220], 00:09:42.108 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[41681], 00:09:42.108 | 99.99th=[47449] 00:09:42.108 bw ( KiB/s): min=22232, max=22824, per=30.79%, avg=22528.00, stdev=418.61, samples=2 00:09:42.108 iops : min= 5558, max= 5706, avg=5632.00, stdev=104.65, samples=2 00:09:42.108 lat (usec) : 750=0.01% 00:09:42.108 lat (msec) : 4=0.14%, 10=26.34%, 20=69.40%, 50=4.11% 00:09:42.108 cpu : usr=4.79%, sys=7.58%, ctx=500, majf=0, minf=1 00:09:42.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:42.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.109 issued rwts: total=5338,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.109 job2: (groupid=0, jobs=1): err= 0: pid=1792254: Mon Dec 9 17:20:08 2024 00:09:42.109 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:09:42.109 slat (nsec): min=1348, max=9635.7k, avg=98930.96, stdev=589136.64 00:09:42.109 clat (usec): min=4752, max=58811, avg=13389.70, stdev=5516.07 00:09:42.109 lat (usec): min=4758, max=58814, avg=13488.63, stdev=5531.36 00:09:42.109 clat percentiles (usec): 00:09:42.109 | 1.00th=[ 6063], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11207], 00:09:42.109 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12518], 60.00th=[13173], 00:09:42.109 | 70.00th=[13435], 80.00th=[14222], 90.00th=[16188], 95.00th=[18744], 00:09:42.109 | 99.00th=[52167], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:09:42.109 | 99.99th=[58983] 00:09:42.109 write: IOPS=4821, BW=18.8MiB/s 
(19.7MB/s)(19.0MiB/1007msec); 0 zone resets 00:09:42.109 slat (usec): min=2, max=9268, avg=103.58, stdev=557.89 00:09:42.109 clat (usec): min=1418, max=33623, avg=13433.93, stdev=5461.83 00:09:42.109 lat (usec): min=1487, max=33626, avg=13537.51, stdev=5499.35 00:09:42.109 clat percentiles (usec): 00:09:42.109 | 1.00th=[ 3851], 5.00th=[ 4752], 10.00th=[ 7046], 20.00th=[10290], 00:09:42.109 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12780], 60.00th=[13435], 00:09:42.109 | 70.00th=[15270], 80.00th=[16188], 90.00th=[20055], 95.00th=[23987], 00:09:42.109 | 99.00th=[31065], 99.50th=[31065], 99.90th=[33817], 99.95th=[33817], 00:09:42.109 | 99.99th=[33817] 00:09:42.109 bw ( KiB/s): min=18736, max=19088, per=25.85%, avg=18912.00, stdev=248.90, samples=2 00:09:42.109 iops : min= 4684, max= 4772, avg=4728.00, stdev=62.23, samples=2 00:09:42.109 lat (msec) : 2=0.06%, 4=0.69%, 10=13.19%, 20=79.69%, 50=5.71% 00:09:42.109 lat (msec) : 100=0.67% 00:09:42.109 cpu : usr=3.28%, sys=7.46%, ctx=424, majf=0, minf=1 00:09:42.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:42.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.109 issued rwts: total=4608,4855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.109 job3: (groupid=0, jobs=1): err= 0: pid=1792255: Mon Dec 9 17:20:08 2024 00:09:42.109 read: IOPS=3830, BW=15.0MiB/s (15.7MB/s)(15.6MiB/1044msec) 00:09:42.109 slat (nsec): min=1316, max=30192k, avg=114318.30, stdev=860504.40 00:09:42.109 clat (usec): min=4568, max=71850, avg=16176.66, stdev=11049.61 00:09:42.109 lat (usec): min=4575, max=71881, avg=16290.98, stdev=11100.22 00:09:42.109 clat percentiles (usec): 00:09:42.109 | 1.00th=[ 7767], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[10945], 00:09:42.109 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12649], 
60.00th=[13304], 00:09:42.109 | 70.00th=[14746], 80.00th=[16188], 90.00th=[21365], 95.00th=[47449], 00:09:42.109 | 99.00th=[63177], 99.50th=[63177], 99.90th=[63177], 99.95th=[67634], 00:09:42.109 | 99.99th=[71828] 00:09:42.109 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:09:42.109 slat (usec): min=2, max=13704, avg=124.33, stdev=736.63 00:09:42.109 clat (usec): min=3349, max=77948, avg=16442.27, stdev=11861.51 00:09:42.109 lat (usec): min=3361, max=78668, avg=16566.59, stdev=11930.50 00:09:42.109 clat percentiles (usec): 00:09:42.109 | 1.00th=[ 6063], 5.00th=[ 8455], 10.00th=[10290], 20.00th=[10945], 00:09:42.109 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[12780], 00:09:42.109 | 70.00th=[13304], 80.00th=[18482], 90.00th=[32375], 95.00th=[44303], 00:09:42.109 | 99.00th=[70779], 99.50th=[71828], 99.90th=[78119], 99.95th=[78119], 00:09:42.109 | 99.99th=[78119] 00:09:42.109 bw ( KiB/s): min=16384, max=16384, per=22.40%, avg=16384.00, stdev= 0.00, samples=2 00:09:42.109 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:42.109 lat (msec) : 4=0.17%, 10=8.18%, 20=77.27%, 50=10.88%, 100=3.50% 00:09:42.109 cpu : usr=3.74%, sys=4.60%, ctx=422, majf=0, minf=1 00:09:42.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:42.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.109 issued rwts: total=3999,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.109 00:09:42.109 Run status group 0 (all jobs): 00:09:42.109 READ: bw=67.5MiB/s (70.8MB/s), 15.0MiB/s-20.8MiB/s (15.7MB/s-21.8MB/s), io=70.5MiB (73.9MB), run=1003-1044msec 00:09:42.109 WRITE: bw=71.4MiB/s (74.9MB/s), 15.3MiB/s-21.9MiB/s (16.1MB/s-23.0MB/s), io=74.6MiB (78.2MB), run=1003-1044msec 00:09:42.109 00:09:42.109 Disk stats 
(read/write): 00:09:42.109 nvme0n1: ios=3886/4096, merge=0/0, ticks=35221/32451, in_queue=67672, util=86.97% 00:09:42.109 nvme0n2: ios=4588/4608, merge=0/0, ticks=25277/24730, in_queue=50007, util=86.92% 00:09:42.109 nvme0n3: ios=4061/4096, merge=0/0, ticks=27152/22743, in_queue=49895, util=98.23% 00:09:42.109 nvme0n4: ios=3116/3372, merge=0/0, ticks=33489/39287, in_queue=72776, util=100.00% 00:09:42.109 17:20:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:42.109 17:20:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1792483 00:09:42.109 17:20:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:42.109 17:20:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:42.109 [global] 00:09:42.109 thread=1 00:09:42.109 invalidate=1 00:09:42.109 rw=read 00:09:42.109 time_based=1 00:09:42.109 runtime=10 00:09:42.109 ioengine=libaio 00:09:42.109 direct=1 00:09:42.109 bs=4096 00:09:42.109 iodepth=1 00:09:42.109 norandommap=1 00:09:42.109 numjobs=1 00:09:42.109 00:09:42.109 [job0] 00:09:42.109 filename=/dev/nvme0n1 00:09:42.109 [job1] 00:09:42.109 filename=/dev/nvme0n2 00:09:42.109 [job2] 00:09:42.109 filename=/dev/nvme0n3 00:09:42.109 [job3] 00:09:42.109 filename=/dev/nvme0n4 00:09:42.109 Could not set queue depth (nvme0n1) 00:09:42.109 Could not set queue depth (nvme0n2) 00:09:42.109 Could not set queue depth (nvme0n3) 00:09:42.109 Could not set queue depth (nvme0n4) 00:09:42.366 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.366 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.366 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.366 job3: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.366 fio-3.35 00:09:42.366 Starting 4 threads 00:09:44.887 17:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:45.145 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=21659648, buflen=4096 00:09:45.145 fio: pid=1792626, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:45.145 17:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:45.402 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44560384, buflen=4096 00:09:45.402 fio: pid=1792625, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:45.402 17:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:45.402 17:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:45.402 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=42115072, buflen=4096 00:09:45.402 fio: pid=1792623, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:45.402 17:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:45.402 17:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:45.659 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54251520, buflen=4096 00:09:45.659 fio: pid=1792624, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:09:45.659 17:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:45.659 17:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:45.659 00:09:45.659 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1792623: Mon Dec 9 17:20:12 2024 00:09:45.659 read: IOPS=3290, BW=12.9MiB/s (13.5MB/s)(40.2MiB/3125msec) 00:09:45.659 slat (usec): min=6, max=11638, avg= 9.35, stdev=114.71 00:09:45.659 clat (usec): min=177, max=42004, avg=290.80, stdev=1396.59 00:09:45.659 lat (usec): min=184, max=42025, avg=300.15, stdev=1401.58 00:09:45.659 clat percentiles (usec): 00:09:45.659 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 215], 00:09:45.659 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 251], 00:09:45.659 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:09:45.659 | 99.00th=[ 338], 99.50th=[ 375], 99.90th=[40633], 99.95th=[41157], 00:09:45.659 | 99.99th=[41681] 00:09:45.659 bw ( KiB/s): min= 4239, max=17376, per=28.64%, avg=13565.17, stdev=4949.28, samples=6 00:09:45.659 iops : min= 1059, max= 4344, avg=3391.17, stdev=1237.60, samples=6 00:09:45.659 lat (usec) : 250=58.36%, 500=41.46%, 750=0.02%, 1000=0.01% 00:09:45.659 lat (msec) : 2=0.01%, 4=0.02%, 50=0.12% 00:09:45.659 cpu : usr=1.25%, sys=4.42%, ctx=10287, majf=0, minf=2 00:09:45.659 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.659 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.659 issued rwts: total=10283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.659 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.659 job1: 
(groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1792624: Mon Dec 9 17:20:12 2024 00:09:45.659 read: IOPS=3951, BW=15.4MiB/s (16.2MB/s)(51.7MiB/3352msec) 00:09:45.659 slat (usec): min=6, max=13411, avg= 9.80, stdev=180.48 00:09:45.659 clat (usec): min=163, max=42771, avg=240.47, stdev=737.19 00:09:45.659 lat (usec): min=170, max=52509, avg=250.27, stdev=799.84 00:09:45.659 clat percentiles (usec): 00:09:45.659 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 204], 00:09:45.659 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:09:45.659 | 70.00th=[ 237], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 281], 00:09:45.659 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 457], 99.95th=[ 668], 00:09:45.659 | 99.99th=[41681] 00:09:45.659 bw ( KiB/s): min=14760, max=18000, per=35.67%, avg=16894.67, stdev=1181.31, samples=6 00:09:45.659 iops : min= 3690, max= 4500, avg=4223.67, stdev=295.33, samples=6 00:09:45.659 lat (usec) : 250=76.20%, 500=23.71%, 750=0.04% 00:09:45.659 lat (msec) : 20=0.01%, 50=0.03% 00:09:45.659 cpu : usr=0.75%, sys=3.79%, ctx=13249, majf=0, minf=2 00:09:45.659 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.659 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.659 issued rwts: total=13246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.659 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.659 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1792625: Mon Dec 9 17:20:12 2024 00:09:45.659 read: IOPS=3732, BW=14.6MiB/s (15.3MB/s)(42.5MiB/2915msec) 00:09:45.659 slat (usec): min=6, max=16635, avg=10.83, stdev=209.87 00:09:45.659 clat (usec): min=184, max=40931, avg=253.79, stdev=514.28 00:09:45.659 lat (usec): min=191, max=40943, avg=264.62, stdev=556.46 00:09:45.659 
clat percentiles (usec): 00:09:45.659 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 229], 00:09:45.659 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:09:45.659 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 281], 00:09:45.659 | 99.00th=[ 314], 99.50th=[ 371], 99.90th=[ 510], 99.95th=[ 586], 00:09:45.659 | 99.99th=[34866] 00:09:45.659 bw ( KiB/s): min=14192, max=15672, per=31.99%, avg=15155.20, stdev=605.17, samples=5 00:09:45.660 iops : min= 3548, max= 3918, avg=3788.80, stdev=151.29, samples=5 00:09:45.660 lat (usec) : 250=56.55%, 500=43.31%, 750=0.09% 00:09:45.660 lat (msec) : 2=0.02%, 50=0.02% 00:09:45.660 cpu : usr=0.93%, sys=4.50%, ctx=10883, majf=0, minf=2 00:09:45.660 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.660 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.660 issued rwts: total=10880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.660 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.660 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1792626: Mon Dec 9 17:20:12 2024 00:09:45.660 read: IOPS=1957, BW=7828KiB/s (8016kB/s)(20.7MiB/2702msec) 00:09:45.660 slat (nsec): min=6461, max=42373, avg=7429.48, stdev=1486.64 00:09:45.660 clat (usec): min=185, max=42021, avg=500.55, stdev=3090.12 00:09:45.660 lat (usec): min=192, max=42029, avg=507.98, stdev=3090.75 00:09:45.660 clat percentiles (usec): 00:09:45.660 | 1.00th=[ 200], 5.00th=[ 221], 10.00th=[ 233], 20.00th=[ 243], 00:09:45.660 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:09:45.660 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 334], 00:09:45.660 | 99.00th=[ 412], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:09:45.660 | 99.99th=[42206] 00:09:45.660 bw ( KiB/s): min= 128, max=14248, 
per=15.58%, avg=7382.40, stdev=5756.26, samples=5 00:09:45.660 iops : min= 32, max= 3562, avg=1845.60, stdev=1439.07, samples=5 00:09:45.660 lat (usec) : 250=32.84%, 500=66.53%, 750=0.02% 00:09:45.660 lat (msec) : 50=0.59% 00:09:45.660 cpu : usr=0.67%, sys=1.67%, ctx=5289, majf=0, minf=1 00:09:45.660 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.660 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.660 issued rwts: total=5289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.660 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.660 00:09:45.660 Run status group 0 (all jobs): 00:09:45.660 READ: bw=46.3MiB/s (48.5MB/s), 7828KiB/s-15.4MiB/s (8016kB/s-16.2MB/s), io=155MiB (163MB), run=2702-3352msec 00:09:45.660 00:09:45.660 Disk stats (read/write): 00:09:45.660 nvme0n1: ios=10282/0, merge=0/0, ticks=2893/0, in_queue=2893, util=95.41% 00:09:45.660 nvme0n2: ios=13103/0, merge=0/0, ticks=2922/0, in_queue=2922, util=95.36% 00:09:45.660 nvme0n3: ios=10716/0, merge=0/0, ticks=2654/0, in_queue=2654, util=95.51% 00:09:45.660 nvme0n4: ios=4980/0, merge=0/0, ticks=2546/0, in_queue=2546, util=96.45% 00:09:45.919 17:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:45.919 17:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:46.214 17:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:46.214 17:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:46.502 17:20:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:46.502 17:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:46.502 17:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:46.502 17:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:46.786 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:46.786 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1792483 00:09:46.786 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:46.786 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:46.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.786 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:46.786 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:46.786 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:46.786 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.786 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:46.786 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.786 17:20:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:46.786 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:46.786 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:46.786 nvmf hotplug test: fio failed as expected 00:09:46.786 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.059 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:47.059 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:47.059 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:47.059 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:47.059 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:47.059 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.059 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:47.059 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.059 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:47.059 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.059 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.059 rmmod nvme_tcp 00:09:47.059 rmmod nvme_fabrics 00:09:47.059 rmmod nvme_keyring 00:09:47.059 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1789605 ']' 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1789605 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1789605 ']' 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1789605 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1789605 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1789605' 00:09:47.317 killing process with pid 1789605 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1789605 00:09:47.317 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1789605 00:09:47.318 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:47.318 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:47.318 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:47.318 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:47.318 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:47.318 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:47.318 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:47.318 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.318 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:47.318 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.318 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.318 17:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.862 17:20:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:49.862 00:09:49.862 real 0m27.556s 00:09:49.862 user 1m50.300s 00:09:49.862 sys 0m8.945s 00:09:49.862 17:20:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.862 17:20:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.862 ************************************ 00:09:49.862 END TEST nvmf_fio_target 00:09:49.862 ************************************ 00:09:49.862 17:20:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:49.862 17:20:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:49.862 17:20:15 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.862 17:20:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.862 ************************************ 00:09:49.862 START TEST nvmf_bdevio 00:09:49.862 ************************************ 00:09:49.862 17:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:49.862 * Looking for test storage... 00:09:49.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@340 -- # ver1_l=2 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@368 -- # return 0 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:49.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.863 --rc genhtml_branch_coverage=1 00:09:49.863 --rc genhtml_function_coverage=1 00:09:49.863 --rc genhtml_legend=1 00:09:49.863 --rc geninfo_all_blocks=1 00:09:49.863 --rc geninfo_unexecuted_blocks=1 00:09:49.863 00:09:49.863 ' 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:49.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.863 --rc genhtml_branch_coverage=1 00:09:49.863 --rc genhtml_function_coverage=1 00:09:49.863 --rc genhtml_legend=1 00:09:49.863 --rc geninfo_all_blocks=1 00:09:49.863 --rc geninfo_unexecuted_blocks=1 00:09:49.863 00:09:49.863 ' 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:49.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.863 --rc genhtml_branch_coverage=1 00:09:49.863 --rc genhtml_function_coverage=1 00:09:49.863 --rc genhtml_legend=1 00:09:49.863 --rc geninfo_all_blocks=1 00:09:49.863 --rc geninfo_unexecuted_blocks=1 00:09:49.863 00:09:49.863 ' 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:49.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.863 --rc genhtml_branch_coverage=1 00:09:49.863 --rc genhtml_function_coverage=1 00:09:49.863 --rc genhtml_legend=1 00:09:49.863 --rc geninfo_all_blocks=1 00:09:49.863 --rc geninfo_unexecuted_blocks=1 00:09:49.863 00:09:49.863 ' 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:49.863 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:49.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:49.864 17:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:56.431 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.431 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.432 17:20:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:56.432 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:56.432 Found net devices under 0000:af:00.0: cvl_0_0 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:56.432 Found net devices under 0000:af:00.1: cvl_0_1 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:56.432 17:20:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:56.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:09:56.432 00:09:56.432 --- 10.0.0.2 ping statistics --- 00:09:56.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.432 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:56.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:09:56.432 00:09:56.432 --- 10.0.0.1 ping statistics --- 00:09:56.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.432 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1797015 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1797015 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1797015 ']' 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.432 [2024-12-09 17:20:22.177386] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:09:56.432 [2024-12-09 17:20:22.177434] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.432 [2024-12-09 17:20:22.257593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.432 [2024-12-09 17:20:22.298333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.432 [2024-12-09 17:20:22.298369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:56.432 [2024-12-09 17:20:22.298376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.432 [2024-12-09 17:20:22.298382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.432 [2024-12-09 17:20:22.298387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.432 [2024-12-09 17:20:22.299752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:56.432 [2024-12-09 17:20:22.299863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:56.432 [2024-12-09 17:20:22.299970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.432 [2024-12-09 17:20:22.299972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.432 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.432 [2024-12-09 17:20:22.432779] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.433 Malloc0 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.433 [2024-12-09 
17:20:22.490804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:56.433 { 00:09:56.433 "params": { 00:09:56.433 "name": "Nvme$subsystem", 00:09:56.433 "trtype": "$TEST_TRANSPORT", 00:09:56.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:56.433 "adrfam": "ipv4", 00:09:56.433 "trsvcid": "$NVMF_PORT", 00:09:56.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:56.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:56.433 "hdgst": ${hdgst:-false}, 00:09:56.433 "ddgst": ${ddgst:-false} 00:09:56.433 }, 00:09:56.433 "method": "bdev_nvme_attach_controller" 00:09:56.433 } 00:09:56.433 EOF 00:09:56.433 )") 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:56.433 17:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:56.433 "params": { 00:09:56.433 "name": "Nvme1", 00:09:56.433 "trtype": "tcp", 00:09:56.433 "traddr": "10.0.0.2", 00:09:56.433 "adrfam": "ipv4", 00:09:56.433 "trsvcid": "4420", 00:09:56.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:56.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:56.433 "hdgst": false, 00:09:56.433 "ddgst": false 00:09:56.433 }, 00:09:56.433 "method": "bdev_nvme_attach_controller" 00:09:56.433 }' 00:09:56.433 [2024-12-09 17:20:22.541044] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:09:56.433 [2024-12-09 17:20:22.541085] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1797038 ] 00:09:56.433 [2024-12-09 17:20:22.613856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:56.433 [2024-12-09 17:20:22.655870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.433 [2024-12-09 17:20:22.655979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.433 [2024-12-09 17:20:22.655980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.433 I/O targets: 00:09:56.433 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:56.433 00:09:56.433 00:09:56.433 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.433 http://cunit.sourceforge.net/ 00:09:56.433 00:09:56.433 00:09:56.433 Suite: bdevio tests on: Nvme1n1 00:09:56.690 Test: blockdev write read block ...passed 00:09:56.690 Test: blockdev write zeroes read block ...passed 00:09:56.690 Test: blockdev write zeroes read no split ...passed 00:09:56.690 Test: blockdev write zeroes read split 
...passed 00:09:56.690 Test: blockdev write zeroes read split partial ...passed 00:09:56.690 Test: blockdev reset ...[2024-12-09 17:20:23.051567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:56.690 [2024-12-09 17:20:23.051628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1add4f0 (9): Bad file descriptor 00:09:56.690 [2024-12-09 17:20:23.193035] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:56.690 passed 00:09:56.690 Test: blockdev write read 8 blocks ...passed 00:09:56.690 Test: blockdev write read size > 128k ...passed 00:09:56.690 Test: blockdev write read invalid size ...passed 00:09:56.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:56.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:56.946 Test: blockdev write read max offset ...passed 00:09:56.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:56.946 Test: blockdev writev readv 8 blocks ...passed 00:09:56.946 Test: blockdev writev readv 30 x 1block ...passed 00:09:56.946 Test: blockdev writev readv block ...passed 00:09:56.946 Test: blockdev writev readv size > 128k ...passed 00:09:56.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:56.946 Test: blockdev comparev and writev ...[2024-12-09 17:20:23.405184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.946 [2024-12-09 17:20:23.405215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:56.946 [2024-12-09 17:20:23.405229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.946 [2024-12-09 
17:20:23.405237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:56.946 [2024-12-09 17:20:23.405484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.946 [2024-12-09 17:20:23.405494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:56.946 [2024-12-09 17:20:23.405506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.946 [2024-12-09 17:20:23.405514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:56.946 [2024-12-09 17:20:23.405749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.946 [2024-12-09 17:20:23.405760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:56.946 [2024-12-09 17:20:23.405771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.946 [2024-12-09 17:20:23.405778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:56.946 [2024-12-09 17:20:23.406001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.946 [2024-12-09 17:20:23.406012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:56.947 [2024-12-09 17:20:23.406023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.947 [2024-12-09 17:20:23.406031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:56.947 passed 00:09:57.204 Test: blockdev nvme passthru rw ...passed 00:09:57.204 Test: blockdev nvme passthru vendor specific ...[2024-12-09 17:20:23.488573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:57.204 [2024-12-09 17:20:23.488596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:57.204 [2024-12-09 17:20:23.488700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:57.204 [2024-12-09 17:20:23.488710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:57.204 [2024-12-09 17:20:23.488808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:57.204 [2024-12-09 17:20:23.488818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:57.204 [2024-12-09 17:20:23.488921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:57.204 [2024-12-09 17:20:23.488931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:57.204 passed 00:09:57.204 Test: blockdev nvme admin passthru ...passed 00:09:57.204 Test: blockdev copy ...passed 00:09:57.204 00:09:57.204 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.204 suites 1 1 n/a 0 0 00:09:57.204 tests 23 23 23 0 0 00:09:57.204 asserts 152 152 152 0 n/a 00:09:57.204 00:09:57.204 Elapsed time = 1.226 seconds 
00:09:57.204 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.204 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.204 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.204 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.204 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:57.204 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:57.204 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.204 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:57.204 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.204 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:57.204 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.204 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.204 rmmod nvme_tcp 00:09:57.204 rmmod nvme_fabrics 00:09:57.204 rmmod nvme_keyring 00:09:57.463 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.463 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:57.463 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:57.463 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1797015 ']' 00:09:57.463 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1797015 00:09:57.463 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1797015 ']' 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1797015 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1797015 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1797015' 00:09:57.464 killing process with pid 1797015 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1797015 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1797015 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.464 17:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.464 17:20:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:57.464 17:20:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.464 17:20:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.464 17:20:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.464 17:20:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.999 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.000 00:10:00.000 real 0m10.100s 00:10:00.000 user 0m10.848s 00:10:00.000 sys 0m4.995s 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.000 ************************************ 00:10:00.000 END TEST nvmf_bdevio 00:10:00.000 ************************************ 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:00.000 00:10:00.000 real 4m35.394s 00:10:00.000 user 10m28.894s 00:10:00.000 sys 1m39.147s 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.000 ************************************ 00:10:00.000 END TEST nvmf_target_core 00:10:00.000 ************************************ 00:10:00.000 17:20:26 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:00.000 17:20:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.000 17:20:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.000 17:20:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:10:00.000 ************************************ 00:10:00.000 START TEST nvmf_target_extra 00:10:00.000 ************************************ 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:00.000 * Looking for test storage... 00:10:00.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:00.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.000 --rc genhtml_branch_coverage=1 00:10:00.000 --rc genhtml_function_coverage=1 00:10:00.000 --rc genhtml_legend=1 00:10:00.000 --rc geninfo_all_blocks=1 
00:10:00.000 --rc geninfo_unexecuted_blocks=1 00:10:00.000 00:10:00.000 ' 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:00.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.000 --rc genhtml_branch_coverage=1 00:10:00.000 --rc genhtml_function_coverage=1 00:10:00.000 --rc genhtml_legend=1 00:10:00.000 --rc geninfo_all_blocks=1 00:10:00.000 --rc geninfo_unexecuted_blocks=1 00:10:00.000 00:10:00.000 ' 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:00.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.000 --rc genhtml_branch_coverage=1 00:10:00.000 --rc genhtml_function_coverage=1 00:10:00.000 --rc genhtml_legend=1 00:10:00.000 --rc geninfo_all_blocks=1 00:10:00.000 --rc geninfo_unexecuted_blocks=1 00:10:00.000 00:10:00.000 ' 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:00.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.000 --rc genhtml_branch_coverage=1 00:10:00.000 --rc genhtml_function_coverage=1 00:10:00.000 --rc genhtml_legend=1 00:10:00.000 --rc geninfo_all_blocks=1 00:10:00.000 --rc geninfo_unexecuted_blocks=1 00:10:00.000 00:10:00.000 ' 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.000 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.001 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.001 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.001 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:00.001 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:00.001 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:00.001 17:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:00.001 17:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.001 17:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.001 17:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:00.001 ************************************ 00:10:00.001 START TEST nvmf_example 00:10:00.001 ************************************ 00:10:00.001 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:00.001 * Looking for test storage... 00:10:00.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.001 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:00.001 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:00.001 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.261 
17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:00.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.261 --rc genhtml_branch_coverage=1 00:10:00.261 --rc genhtml_function_coverage=1 00:10:00.261 --rc genhtml_legend=1 00:10:00.261 --rc geninfo_all_blocks=1 00:10:00.261 --rc geninfo_unexecuted_blocks=1 00:10:00.261 00:10:00.261 ' 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:00.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.261 --rc genhtml_branch_coverage=1 00:10:00.261 --rc genhtml_function_coverage=1 00:10:00.261 --rc genhtml_legend=1 00:10:00.261 --rc geninfo_all_blocks=1 00:10:00.261 --rc geninfo_unexecuted_blocks=1 00:10:00.261 00:10:00.261 ' 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:00.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.261 --rc genhtml_branch_coverage=1 00:10:00.261 --rc genhtml_function_coverage=1 00:10:00.261 --rc genhtml_legend=1 00:10:00.261 --rc geninfo_all_blocks=1 00:10:00.261 --rc geninfo_unexecuted_blocks=1 00:10:00.261 00:10:00.261 ' 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:00.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.261 --rc 
genhtml_branch_coverage=1 00:10:00.261 --rc genhtml_function_coverage=1 00:10:00.261 --rc genhtml_legend=1 00:10:00.261 --rc geninfo_all_blocks=1 00:10:00.261 --rc geninfo_unexecuted_blocks=1 00:10:00.261 00:10:00.261 ' 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.261 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:00.262 17:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.262 
17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.262 17:20:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.831 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.832 17:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:06.832 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:06.832 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:06.832 Found net devices under 0000:af:00.0: cvl_0_0 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.832 17:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:06.832 Found net devices under 0000:af:00.1: cvl_0_1 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.832 
17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:06.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:10:06.832 00:10:06.832 --- 10.0.0.2 ping statistics --- 00:10:06.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.832 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:10:06.832 00:10:06.832 --- 10.0.0.1 ping statistics --- 00:10:06.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.832 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:06.832 17:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1800810 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1800810 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1800810 ']' 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:06.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.832 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.091 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:07.092 
17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:07.092 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:19.297 Initializing NVMe Controllers 00:10:19.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:19.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:19.297 Initialization complete. Launching workers. 00:10:19.297 ======================================================== 00:10:19.297 Latency(us) 00:10:19.297 Device Information : IOPS MiB/s Average min max 00:10:19.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18362.04 71.73 3484.91 688.75 17213.75 00:10:19.297 ======================================================== 00:10:19.297 Total : 18362.04 71.73 3484.91 688.75 17213.75 00:10:19.297 00:10:19.297 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.298 rmmod nvme_tcp 00:10:19.298 rmmod nvme_fabrics 00:10:19.298 rmmod nvme_keyring 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1800810 ']' 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1800810 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1800810 ']' 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1800810 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1800810 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1800810' 00:10:19.298 killing process with pid 1800810 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1800810 00:10:19.298 17:20:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1800810 00:10:19.298 nvmf threads initialize successfully 00:10:19.298 bdev subsystem init successfully 00:10:19.298 created a nvmf target service 00:10:19.298 create targets's poll groups done 00:10:19.298 all subsystems of target started 00:10:19.298 nvmf target is running 00:10:19.298 all subsystems of target stopped 00:10:19.298 destroy targets's poll groups done 00:10:19.298 destroyed the nvmf target service 00:10:19.298 bdev subsystem 
finish successfully 00:10:19.298 nvmf threads destroy successfully 00:10:19.298 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:19.298 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:19.298 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:19.298 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:19.298 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:19.298 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:19.298 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:19.298 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:19.298 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:19.298 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.298 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.298 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.867 00:10:19.867 real 0m19.726s 00:10:19.867 user 0m45.881s 00:10:19.867 sys 0m5.989s 00:10:19.867 
17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.867 ************************************ 00:10:19.867 END TEST nvmf_example 00:10:19.867 ************************************ 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:19.867 ************************************ 00:10:19.867 START TEST nvmf_filesystem 00:10:19.867 ************************************ 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:19.867 * Looking for test storage... 
00:10:19.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:19.867 
17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:19.867 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:19.867 --rc genhtml_branch_coverage=1 00:10:19.867 --rc genhtml_function_coverage=1 00:10:19.867 --rc genhtml_legend=1 00:10:19.867 --rc geninfo_all_blocks=1 00:10:19.867 --rc geninfo_unexecuted_blocks=1 00:10:19.867 00:10:19.867 ' 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:19.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.867 --rc genhtml_branch_coverage=1 00:10:19.867 --rc genhtml_function_coverage=1 00:10:19.867 --rc genhtml_legend=1 00:10:19.867 --rc geninfo_all_blocks=1 00:10:19.867 --rc geninfo_unexecuted_blocks=1 00:10:19.867 00:10:19.867 ' 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:19.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.867 --rc genhtml_branch_coverage=1 00:10:19.867 --rc genhtml_function_coverage=1 00:10:19.867 --rc genhtml_legend=1 00:10:19.867 --rc geninfo_all_blocks=1 00:10:19.867 --rc geninfo_unexecuted_blocks=1 00:10:19.867 00:10:19.867 ' 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:19.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.867 --rc genhtml_branch_coverage=1 00:10:19.867 --rc genhtml_function_coverage=1 00:10:19.867 --rc genhtml_legend=1 00:10:19.867 --rc geninfo_all_blocks=1 00:10:19.867 --rc geninfo_unexecuted_blocks=1 00:10:19.867 00:10:19.867 ' 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:19.867 17:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:19.867 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:19.867 17:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:20.130 17:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:20.130 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:20.131 17:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:20.131 17:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:20.131 
17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:20.131 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:20.131 #define SPDK_CONFIG_H 00:10:20.131 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:20.131 #define SPDK_CONFIG_APPS 1 00:10:20.131 #define SPDK_CONFIG_ARCH native 00:10:20.131 #undef SPDK_CONFIG_ASAN 00:10:20.131 #undef SPDK_CONFIG_AVAHI 00:10:20.131 #undef SPDK_CONFIG_CET 00:10:20.131 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:20.131 #define SPDK_CONFIG_COVERAGE 1 00:10:20.131 #define SPDK_CONFIG_CROSS_PREFIX 00:10:20.131 #undef SPDK_CONFIG_CRYPTO 00:10:20.131 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:20.131 #undef SPDK_CONFIG_CUSTOMOCF 00:10:20.131 #undef SPDK_CONFIG_DAOS 00:10:20.131 #define SPDK_CONFIG_DAOS_DIR 00:10:20.131 #define SPDK_CONFIG_DEBUG 1 00:10:20.131 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:20.131 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:20.131 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:20.131 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:20.131 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:20.131 #undef SPDK_CONFIG_DPDK_UADK 00:10:20.131 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:20.131 #define SPDK_CONFIG_EXAMPLES 1 00:10:20.131 #undef SPDK_CONFIG_FC 00:10:20.131 #define SPDK_CONFIG_FC_PATH 00:10:20.131 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:20.131 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:20.131 #define SPDK_CONFIG_FSDEV 1 00:10:20.131 #undef SPDK_CONFIG_FUSE 00:10:20.131 #undef SPDK_CONFIG_FUZZER 00:10:20.131 #define SPDK_CONFIG_FUZZER_LIB 00:10:20.131 #undef SPDK_CONFIG_GOLANG 00:10:20.131 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:20.131 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:20.131 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:20.131 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:20.131 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:20.131 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:20.131 #undef SPDK_CONFIG_HAVE_LZ4 00:10:20.131 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:20.131 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:20.131 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:20.131 #define SPDK_CONFIG_IDXD 1 00:10:20.131 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:20.131 #undef SPDK_CONFIG_IPSEC_MB 00:10:20.131 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:20.131 #define SPDK_CONFIG_ISAL 1 00:10:20.131 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:20.131 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:20.131 #define SPDK_CONFIG_LIBDIR 00:10:20.131 #undef SPDK_CONFIG_LTO 00:10:20.131 #define SPDK_CONFIG_MAX_LCORES 128 00:10:20.131 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:20.131 #define SPDK_CONFIG_NVME_CUSE 1 00:10:20.131 #undef SPDK_CONFIG_OCF 00:10:20.131 #define SPDK_CONFIG_OCF_PATH 00:10:20.131 #define SPDK_CONFIG_OPENSSL_PATH 00:10:20.132 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:20.132 #define SPDK_CONFIG_PGO_DIR 00:10:20.132 #undef SPDK_CONFIG_PGO_USE 00:10:20.132 #define SPDK_CONFIG_PREFIX /usr/local 00:10:20.132 #undef SPDK_CONFIG_RAID5F 00:10:20.132 #undef SPDK_CONFIG_RBD 00:10:20.132 #define SPDK_CONFIG_RDMA 1 00:10:20.132 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:20.132 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:20.132 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:20.132 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:20.132 #define SPDK_CONFIG_SHARED 1 00:10:20.132 #undef SPDK_CONFIG_SMA 00:10:20.132 #define SPDK_CONFIG_TESTS 1 00:10:20.132 #undef SPDK_CONFIG_TSAN 00:10:20.132 #define SPDK_CONFIG_UBLK 1 00:10:20.132 #define SPDK_CONFIG_UBSAN 1 00:10:20.132 #undef SPDK_CONFIG_UNIT_TESTS 00:10:20.132 #undef SPDK_CONFIG_URING 00:10:20.132 #define SPDK_CONFIG_URING_PATH 00:10:20.132 #undef SPDK_CONFIG_URING_ZNS 00:10:20.132 #undef SPDK_CONFIG_USDT 00:10:20.132 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:20.132 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:20.132 #define SPDK_CONFIG_VFIO_USER 1 00:10:20.132 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:20.132 #define SPDK_CONFIG_VHOST 1 00:10:20.132 #define SPDK_CONFIG_VIRTIO 1 00:10:20.132 #undef SPDK_CONFIG_VTUNE 00:10:20.132 #define SPDK_CONFIG_VTUNE_DIR 00:10:20.132 #define SPDK_CONFIG_WERROR 1 00:10:20.132 #define SPDK_CONFIG_WPDK_DIR 00:10:20.132 #undef SPDK_CONFIG_XNVME 00:10:20.132 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:20.132 17:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:20.132 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:20.133 
17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:20.133 17:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:20.133 
17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:20.133 17:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.133 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1803168 ]] 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1803168 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.fWw11h 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.fWw11h/tests/target /tmp/spdk.fWw11h 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:20.134 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=89562923008 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100837203968 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11274280960 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50407235584 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20144435200 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20167442432 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23007232 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=49344491520 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074110464 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10083704832 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10083717120 00:10:20.135 17:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:20.135 * Looking for test storage... 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=89562923008 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@394 -- # new_size=13488873472 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
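The trace above shows the test-storage probe: `df` on the candidate directory, the mount point pulled from the output, and the available space compared against the requested size before `SPDK_TEST_STORAGE` is exported. A minimal standalone sketch of that check (this mirrors the general shape of the autotest_common.sh logic, not its exact code; the function name and messages are illustrative):

```shell
# Probe whether a directory's backing filesystem has enough free space.
# df -P guarantees one data line per filesystem; with -k, column 4 is
# available KiB and column 6 is the mount point.
check_test_storage() {
    local dir=$1 requested_kib=$2
    local mount avail
    mount=$(df -Pk "$dir" | awk 'NR==2 {print $6}')
    avail=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
    if [ "$avail" -ge "$requested_kib" ]; then
        echo "found test storage at $dir (mount $mount, ${avail} KiB free)"
        return 0
    fi
    echo "insufficient space at $dir" >&2
    return 1
}

check_test_storage /tmp 1   # require at least 1 KiB free
```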
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 
00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:20.135 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.136 17:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:20.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.136 --rc genhtml_branch_coverage=1 00:10:20.136 --rc genhtml_function_coverage=1 00:10:20.136 --rc genhtml_legend=1 00:10:20.136 --rc geninfo_all_blocks=1 00:10:20.136 --rc geninfo_unexecuted_blocks=1 00:10:20.136 00:10:20.136 ' 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:20.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.136 --rc genhtml_branch_coverage=1 00:10:20.136 --rc genhtml_function_coverage=1 00:10:20.136 --rc genhtml_legend=1 00:10:20.136 --rc geninfo_all_blocks=1 00:10:20.136 --rc geninfo_unexecuted_blocks=1 00:10:20.136 00:10:20.136 ' 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:20.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.136 --rc genhtml_branch_coverage=1 00:10:20.136 --rc genhtml_function_coverage=1 00:10:20.136 --rc genhtml_legend=1 00:10:20.136 --rc geninfo_all_blocks=1 00:10:20.136 --rc geninfo_unexecuted_blocks=1 00:10:20.136 00:10:20.136 ' 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:20.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.136 --rc 
genhtml_branch_coverage=1 00:10:20.136 --rc genhtml_function_coverage=1 00:10:20.136 --rc genhtml_legend=1 00:10:20.136 --rc geninfo_all_blocks=1 00:10:20.136 --rc geninfo_unexecuted_blocks=1 00:10:20.136 00:10:20.136 ' 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:20.136 17:20:46 
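The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.`, `-`, and `:` and compares the components numerically, padding the shorter version with zeros. A simplified stand-in for that comparison (hedged: this is not the SPDK scripts/common.sh implementation, just the same technique):

```shell
# Return 0 (true) when $1 is strictly less than $2, comparing
# dot/dash-separated components numerically, left to right.
version_lt() {
    local IFS=.-
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i a b
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0}   # missing components compare as 0
        b=${v2[i]:-0}
        if ((a < b)); then return 0; fi
        if ((a > b)); then return 1; fi
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```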
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
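The PATH echoed above carries the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` entries many times over because paths/export.sh prepends them on every sourcing. Duplicates are harmless to lookup but bloat the environment; a hedged sketch of de-duplicating while keeping first-seen order (illustrative only, not part of the SPDK scripts):

```shell
# Remove duplicate entries from a colon-separated path list,
# preserving the first occurrence of each directory.
dedup_path() {
    local out= dir
    local IFS=:
    for dir in $1; do
        case ":$out:" in
            *":$dir:"*) ;;                 # already kept; drop repeat
            *) out=${out:+$out:}$dir ;;
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/a/bin:/b/bin:/a/bin:/c/bin"
```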
MALLOC_BDEV_SIZE=512 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:20.136 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:20.137 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.137 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:20.137 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:20.137 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:20.396 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.396 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.396 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.396 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:20.396 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:20.396 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:20.396 17:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.968 17:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:26.968 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:26.968 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.968 17:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:26.968 Found net devices under 0000:af:00.0: cvl_0_0 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:26.968 Found net devices under 0000:af:00.1: cvl_0_1 00:10:26.968 17:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:26.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:26.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:10:26.968 00:10:26.968 --- 10.0.0.2 ping statistics --- 00:10:26.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.968 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:10:26.968 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:10:26.968 00:10:26.969 --- 10.0.0.1 ping statistics --- 00:10:26.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.969 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:26.969 17:20:52 
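The connectivity check above pings across the veth/netns pair in both directions and the summary line carries the round-trip stats. A hedged sketch of extracting the average rtt from that summary (the line format is assumed from the iputils output shown in the log; the helper name is illustrative):

```shell
# Read full ping output on stdin and print the average rtt in ms.
# The summary line looks like:
#   rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms
# Splitting on spaces and slashes puts the avg value in field 8.
avg_rtt_ms() {
    awk -F'[ /]' '/^rtt/ {print $8}'
}

printf 'rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms\n' | avg_rtt_ms
```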
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:26.969 ************************************ 00:10:26.969 START TEST nvmf_filesystem_no_in_capsule 00:10:26.969 ************************************ 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1806344 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1806344 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1806344 ']' 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.969 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.969 [2024-12-09 17:20:52.775818] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:10:26.969 [2024-12-09 17:20:52.775861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.969 [2024-12-09 17:20:52.853638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.969 [2024-12-09 17:20:52.892406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.969 [2024-12-09 17:20:52.892443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:26.969 [2024-12-09 17:20:52.892451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.969 [2024-12-09 17:20:52.892457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.969 [2024-12-09 17:20:52.892463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.969 [2024-12-09 17:20:52.893840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.969 [2024-12-09 17:20:52.893949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.969 [2024-12-09 17:20:52.894054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.969 [2024-12-09 17:20:52.894055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.226 [2024-12-09 17:20:53.660450] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.226 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.484 Malloc1 00:10:27.484 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.484 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:27.484 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.484 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.484 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.484 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.485 [2024-12-09 17:20:53.826324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:27.485 17:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:27.485 { 00:10:27.485 "name": "Malloc1", 00:10:27.485 "aliases": [ 00:10:27.485 "8fc33c0e-fe1e-4de1-9f63-809f5b44ada6" 00:10:27.485 ], 00:10:27.485 "product_name": "Malloc disk", 00:10:27.485 "block_size": 512, 00:10:27.485 "num_blocks": 1048576, 00:10:27.485 "uuid": "8fc33c0e-fe1e-4de1-9f63-809f5b44ada6", 00:10:27.485 "assigned_rate_limits": { 00:10:27.485 "rw_ios_per_sec": 0, 00:10:27.485 "rw_mbytes_per_sec": 0, 00:10:27.485 "r_mbytes_per_sec": 0, 00:10:27.485 "w_mbytes_per_sec": 0 00:10:27.485 }, 00:10:27.485 "claimed": true, 00:10:27.485 "claim_type": "exclusive_write", 00:10:27.485 "zoned": false, 00:10:27.485 "supported_io_types": { 00:10:27.485 "read": true, 00:10:27.485 "write": true, 00:10:27.485 "unmap": true, 00:10:27.485 "flush": true, 00:10:27.485 "reset": true, 00:10:27.485 "nvme_admin": false, 00:10:27.485 "nvme_io": false, 00:10:27.485 "nvme_io_md": false, 00:10:27.485 "write_zeroes": true, 00:10:27.485 "zcopy": true, 00:10:27.485 "get_zone_info": false, 00:10:27.485 "zone_management": false, 00:10:27.485 "zone_append": false, 00:10:27.485 "compare": false, 00:10:27.485 "compare_and_write": 
false, 00:10:27.485 "abort": true, 00:10:27.485 "seek_hole": false, 00:10:27.485 "seek_data": false, 00:10:27.485 "copy": true, 00:10:27.485 "nvme_iov_md": false 00:10:27.485 }, 00:10:27.485 "memory_domains": [ 00:10:27.485 { 00:10:27.485 "dma_device_id": "system", 00:10:27.485 "dma_device_type": 1 00:10:27.485 }, 00:10:27.485 { 00:10:27.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.485 "dma_device_type": 2 00:10:27.485 } 00:10:27.485 ], 00:10:27.485 "driver_specific": {} 00:10:27.485 } 00:10:27.485 ]' 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:27.485 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:28.857 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:28.857 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:28.857 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:28.857 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:28.857 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:30.754 17:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:30.754 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:31.012 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:31.576 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:32.509 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:32.509 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:32.509 17:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:32.509 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.509 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.509 ************************************ 00:10:32.509 START TEST filesystem_ext4 00:10:32.509 ************************************ 00:10:32.509 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:32.509 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:32.509 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:32.509 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:32.509 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:32.509 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:32.509 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:32.509 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:32.509 17:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:32.509 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:32.509 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:32.509 mke2fs 1.47.0 (5-Feb-2023) 00:10:32.766 Discarding device blocks: 0/522240 done 00:10:32.766 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:32.766 Filesystem UUID: a7a291c2-2914-4485-96ec-7e3f5e1f94dd 00:10:32.766 Superblock backups stored on blocks: 00:10:32.766 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:32.766 00:10:32.766 Allocating group tables: 0/64 done 00:10:32.766 Writing inode tables: 0/64 done 00:10:35.289 Creating journal (8192 blocks): done 00:10:37.593 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:10:37.593 00:10:37.593 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:37.593 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:44.145 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:44.146 17:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1806344 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:44.146 00:10:44.146 real 0m10.555s 00:10:44.146 user 0m0.026s 00:10:44.146 sys 0m0.077s 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:44.146 ************************************ 00:10:44.146 END TEST filesystem_ext4 00:10:44.146 ************************************ 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:44.146 
17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.146 ************************************ 00:10:44.146 START TEST filesystem_btrfs 00:10:44.146 ************************************ 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:44.146 17:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:44.146 btrfs-progs v6.8.1 00:10:44.146 See https://btrfs.readthedocs.io for more information. 00:10:44.146 00:10:44.146 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:44.146 NOTE: several default settings have changed in version 5.15, please make sure 00:10:44.146 this does not affect your deployments: 00:10:44.146 - DUP for metadata (-m dup) 00:10:44.146 - enabled no-holes (-O no-holes) 00:10:44.146 - enabled free-space-tree (-R free-space-tree) 00:10:44.146 00:10:44.146 Label: (null) 00:10:44.146 UUID: efcb1db3-bbba-4386-86cd-1e764589dd71 00:10:44.146 Node size: 16384 00:10:44.146 Sector size: 4096 (CPU page size: 4096) 00:10:44.146 Filesystem size: 510.00MiB 00:10:44.146 Block group profiles: 00:10:44.146 Data: single 8.00MiB 00:10:44.146 Metadata: DUP 32.00MiB 00:10:44.146 System: DUP 8.00MiB 00:10:44.146 SSD detected: yes 00:10:44.146 Zoned device: no 00:10:44.146 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:44.146 Checksum: crc32c 00:10:44.146 Number of devices: 1 00:10:44.146 Devices: 00:10:44.146 ID SIZE PATH 00:10:44.146 1 510.00MiB /dev/nvme0n1p1 00:10:44.146 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:44.146 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:44.404 17:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1806344 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:44.404 00:10:44.404 real 0m1.195s 00:10:44.404 user 0m0.026s 00:10:44.404 sys 0m0.115s 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.404 
17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:44.404 ************************************ 00:10:44.404 END TEST filesystem_btrfs 00:10:44.404 ************************************ 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.404 ************************************ 00:10:44.404 START TEST filesystem_xfs 00:10:44.404 ************************************ 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:44.404 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:44.662 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:44.662 = sectsz=512 attr=2, projid32bit=1 00:10:44.662 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:44.662 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:44.662 data = bsize=4096 blocks=130560, imaxpct=25 00:10:44.662 = sunit=0 swidth=0 blks 00:10:44.662 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:44.662 log =internal log bsize=4096 blocks=16384, version=2 00:10:44.662 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:44.662 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:45.227 Discarding blocks...Done. 
00:10:45.227 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:45.227 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:47.124 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:47.124 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:47.124 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:47.124 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:47.124 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:47.124 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:47.124 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1806344 00:10:47.124 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:47.124 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:47.124 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:47.124 17:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:47.124 00:10:47.124 real 0m2.614s 00:10:47.124 user 0m0.024s 00:10:47.124 sys 0m0.073s 00:10:47.124 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.124 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:47.124 ************************************ 00:10:47.124 END TEST filesystem_xfs 00:10:47.124 ************************************ 00:10:47.124 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:47.381 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:47.381 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1806344 00:10:47.639 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1806344 ']' 00:10:47.640 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1806344 00:10:47.640 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:47.640 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.640 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1806344 00:10:47.640 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.640 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.640 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1806344' 00:10:47.640 killing process with pid 1806344 00:10:47.640 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1806344 00:10:47.640 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1806344 00:10:47.898 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:47.898 00:10:47.898 real 0m21.619s 00:10:47.898 user 1m25.333s 00:10:47.898 sys 0m1.534s 00:10:47.898 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.898 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.898 ************************************ 00:10:47.898 END TEST nvmf_filesystem_no_in_capsule 00:10:47.898 ************************************ 00:10:47.898 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:47.898 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:47.898 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.898 17:21:14 
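`killprocess` (autotest_common.sh@954-978, tearing down nvmf_tgt pid 1806344 above) first confirms the pid is alive with `kill -0`, checks the process name via `ps`, then kills it and waits for it to exit. A simplified sketch using a throwaway background `sleep` in place of the SPDK reactor (the `ps`/name check is omitted):

```shell
#!/bin/sh
# Simplified killprocess: confirm the pid exists, terminate it,
# then reap it so a later `kill -0` reliably fails.
killprocess() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 1  # not running at all
    kill "$pid"
    wait "$pid" 2>/dev/null                 # reap; ignore TERM status
    return 0
}

sleep 30 &
victim=$!
killprocess "$victim" && echo "killed $victim"
```

The `wait` matters: without it the child lingers as a zombie and `kill -0` would still succeed, which is exactly what the script's post-kill `wait 1806344` guards against.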
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.898 ************************************ 00:10:47.898 START TEST nvmf_filesystem_in_capsule 00:10:47.898 ************************************ 00:10:47.898 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:47.898 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:47.898 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:47.898 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.899 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.899 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.899 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1810674 00:10:47.899 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1810674 00:10:47.899 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.899 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1810674 ']' 00:10:47.899 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.899 17:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.899 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.899 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.899 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.157 [2024-12-09 17:21:14.462121] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:10:48.157 [2024-12-09 17:21:14.462160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.157 [2024-12-09 17:21:14.538067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.157 [2024-12-09 17:21:14.578346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.157 [2024-12-09 17:21:14.578383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.157 [2024-12-09 17:21:14.578390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.157 [2024-12-09 17:21:14.578396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.157 [2024-12-09 17:21:14.578401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
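`waitforlisten` above launches nvmf_tgt and then polls (with `max_retries=100`) until the target is reachable on `/var/tmp/spdk.sock`. A rough sketch of the retry shape, reduced to a path-existence probe against a temp file — the real helper also checks the pid and issues an RPC over the socket, both omitted here, and the function name `waitforlisten_path` is mine:

```shell
#!/bin/sh
# Reduced waitforlisten: poll for the RPC socket path, giving up
# after max_retries attempts (real helper: rpc.py probe + sleep).
waitforlisten_path() {
    rpc_addr=$1 max_retries=${2:-100} i=0
    while [ "$i" -lt "$max_retries" ]; do
        i=$((i + 1))
        if [ -e "$rpc_addr" ]; then
            return 0
        fi
        # real helper sleeps between retries; skipped in the sketch
    done
    return 1
}

sock=$(mktemp)                       # stand-in for /var/tmp/spdk.sock
waitforlisten_path "$sock" 5 && echo listening
rm -f "$sock"
```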
00:10:48.157 [2024-12-09 17:21:14.579818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.157 [2024-12-09 17:21:14.579928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.157 [2024-12-09 17:21:14.580037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.157 [2024-12-09 17:21:14.580038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.157 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.157 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:48.157 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:48.157 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:48.157 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.415 [2024-12-09 17:21:14.716430] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.415 Malloc1 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.415 17:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.415 [2024-12-09 17:21:14.873357] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.415 17:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:48.415 { 00:10:48.415 "name": "Malloc1", 00:10:48.415 "aliases": [ 00:10:48.415 "cf03b215-48ad-43ac-94bc-856b341c5750" 00:10:48.415 ], 00:10:48.415 "product_name": "Malloc disk", 00:10:48.415 "block_size": 512, 00:10:48.415 "num_blocks": 1048576, 00:10:48.415 "uuid": "cf03b215-48ad-43ac-94bc-856b341c5750", 00:10:48.415 "assigned_rate_limits": { 00:10:48.415 "rw_ios_per_sec": 0, 00:10:48.415 "rw_mbytes_per_sec": 0, 00:10:48.415 "r_mbytes_per_sec": 0, 00:10:48.415 "w_mbytes_per_sec": 0 00:10:48.415 }, 00:10:48.415 "claimed": true, 00:10:48.415 "claim_type": "exclusive_write", 00:10:48.415 "zoned": false, 00:10:48.415 "supported_io_types": { 00:10:48.415 "read": true, 00:10:48.415 "write": true, 00:10:48.415 "unmap": true, 00:10:48.415 "flush": true, 00:10:48.415 "reset": true, 00:10:48.415 "nvme_admin": false, 00:10:48.415 "nvme_io": false, 00:10:48.415 "nvme_io_md": false, 00:10:48.415 "write_zeroes": true, 00:10:48.415 "zcopy": true, 00:10:48.415 "get_zone_info": false, 00:10:48.415 "zone_management": false, 00:10:48.415 "zone_append": false, 00:10:48.415 "compare": false, 00:10:48.415 "compare_and_write": false, 00:10:48.415 "abort": true, 00:10:48.415 "seek_hole": false, 00:10:48.415 "seek_data": false, 00:10:48.415 "copy": true, 00:10:48.415 "nvme_iov_md": false 00:10:48.415 }, 00:10:48.415 "memory_domains": [ 00:10:48.415 { 00:10:48.415 "dma_device_id": "system", 00:10:48.415 "dma_device_type": 1 00:10:48.415 }, 00:10:48.415 { 00:10:48.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.415 "dma_device_type": 2 00:10:48.415 } 00:10:48.415 ], 00:10:48.415 
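`get_bdev_size` extracts `block_size` and `num_blocks` from the `bdev_get_bdevs` JSON with jq and multiplies them; for the Malloc1 bdev printed above that is 512 * 1048576 = 536870912 bytes, i.e. the 512 MiB that filesystem.sh@58 later stores as `malloc_size`. The arithmetic as a standalone sketch (jq step replaced by passing the two fields directly):

```shell
#!/bin/sh
# Reproduce get_bdev_size for the Malloc1 bdev in the log:
# size in bytes = block_size * num_blocks.
bdev_size_bytes() {
    bs=$1   # .block_size from bdev_get_bdevs
    nb=$2   # .num_blocks from bdev_get_bdevs
    echo $((bs * nb))
}

bdev_size_bytes 512 1048576   # prints 536870912 (512 MiB)
```

This is the value the script later compares against the NVMe device size at filesystem.sh@67 (`nvme_size == malloc_size`) before partitioning.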
"driver_specific": {} 00:10:48.415 } 00:10:48.415 ]' 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:48.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:48.673 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:48.673 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:48.673 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:48.673 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:48.673 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.605 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.605 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:49.605 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.605 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:49.605 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:52.135 17:21:18 
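The `waitforserial` loop traced above (autotest_common.sh@1202-1212) polls `lsblk -l -o NAME,SERIAL` every two seconds, up to 15 tries, until a device carrying the serial SPDKISFASTANDAWESOME shows up after `nvme connect`. A sketch of that loop with lsblk replaced by a stub that only reports the device on its third call, mimicking a target mid-connect (the stub and its counter file are illustrative):

```shell
#!/bin/sh
# Stub for `lsblk -l -o NAME,SERIAL`: succeeds on the 3rd poll.
# A counter file is used because pipelines run in subshells.
tries_file=$(mktemp)
echo 0 > "$tries_file"
probe_serial() {
    calls=$(($(cat "$tries_file") + 1))
    echo "$calls" > "$tries_file"
    [ "$calls" -ge 3 ] && echo "nvme0n1 SPDKISFASTANDAWESOME"
    return 0
}

waitforserial() {
    serial=$1 i=0
    while [ "$i" -lt 15 ]; do
        i=$((i + 1))
        # real helper: lsblk output, grep -c, then sleep 2
        if probe_serial | grep -q -w "$serial"; then
            return 0
        fi
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME && echo connected
echo "polls: $(cat "$tries_file")"   # prints "polls: 3"
rm -f "$tries_file"
```

The same pattern runs in reverse at teardown: `waitforserial_disconnect` (autotest_common.sh@1223-1235, visible earlier in the log) loops until the serial disappears from lsblk output.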
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:52.135 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:52.394 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.327 ************************************ 00:10:53.327 START TEST filesystem_in_capsule_ext4 00:10:53.327 ************************************ 00:10:53.327 17:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:53.327 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:53.327 mke2fs 1.47.0 (5-Feb-2023) 00:10:53.584 Discarding device blocks: 
0/522240 done 00:10:53.584 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:53.584 Filesystem UUID: b031d734-fff7-437b-afd3-0692fd9cdf65 00:10:53.584 Superblock backups stored on blocks: 00:10:53.584 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:53.584 00:10:53.584 Allocating group tables: 0/64 done 00:10:53.584 Writing inode tables: 0/64 done 00:10:56.225 Creating journal (8192 blocks): done 00:10:57.157 Writing superblocks and filesystem accounting information: 0/64 done 00:10:57.157 00:10:57.158 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:57.158 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1810674 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.712 00:11:03.712 real 0m9.689s 00:11:03.712 user 0m0.031s 00:11:03.712 sys 0m0.072s 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:03.712 ************************************ 00:11:03.712 END TEST filesystem_in_capsule_ext4 00:11:03.712 ************************************ 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.712 ************************************ 00:11:03.712 START 
TEST filesystem_in_capsule_btrfs 00:11:03.712 ************************************ 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:03.712 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:03.712 btrfs-progs v6.8.1 00:11:03.712 See https://btrfs.readthedocs.io for more information. 00:11:03.712 00:11:03.712 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:03.712 NOTE: several default settings have changed in version 5.15, please make sure 00:11:03.712 this does not affect your deployments: 00:11:03.712 - DUP for metadata (-m dup) 00:11:03.713 - enabled no-holes (-O no-holes) 00:11:03.713 - enabled free-space-tree (-R free-space-tree) 00:11:03.713 00:11:03.713 Label: (null) 00:11:03.713 UUID: 0dc26ee6-950d-4d5e-90f3-03e053f097b0 00:11:03.713 Node size: 16384 00:11:03.713 Sector size: 4096 (CPU page size: 4096) 00:11:03.713 Filesystem size: 510.00MiB 00:11:03.713 Block group profiles: 00:11:03.713 Data: single 8.00MiB 00:11:03.713 Metadata: DUP 32.00MiB 00:11:03.713 System: DUP 8.00MiB 00:11:03.713 SSD detected: yes 00:11:03.713 Zoned device: no 00:11:03.713 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:03.713 Checksum: crc32c 00:11:03.713 Number of devices: 1 00:11:03.713 Devices: 00:11:03.713 ID SIZE PATH 00:11:03.713 1 510.00MiB /dev/nvme0n1p1 00:11:03.713 00:11:03.713 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:03.713 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:03.713 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:03.713 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1810674 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.713 00:11:03.713 real 0m0.548s 00:11:03.713 user 0m0.027s 00:11:03.713 sys 0m0.108s 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:03.713 ************************************ 00:11:03.713 END TEST filesystem_in_capsule_btrfs 00:11:03.713 ************************************ 00:11:03.713 17:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.713 ************************************ 00:11:03.713 START TEST filesystem_in_capsule_xfs 00:11:03.713 ************************************ 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:03.713 
17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:03.713 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:03.713 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:03.713 = sectsz=512 attr=2, projid32bit=1 00:11:03.713 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:03.713 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:03.713 data = bsize=4096 blocks=130560, imaxpct=25 00:11:03.713 = sunit=0 swidth=0 blks 00:11:03.713 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:03.713 log =internal log bsize=4096 blocks=16384, version=2 00:11:03.713 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:03.713 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:04.645 Discarding blocks...Done. 
00:11:04.645 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:04.645 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1810674 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.173 00:11:07.173 real 0m3.124s 00:11:07.173 user 0m0.018s 00:11:07.173 sys 0m0.082s 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:07.173 ************************************ 00:11:07.173 END TEST filesystem_in_capsule_xfs 00:11:07.173 ************************************ 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.173 17:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1810674 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1810674 ']' 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1810674 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.173 17:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1810674 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.173 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1810674' 00:11:07.173 killing process with pid 1810674 00:11:07.174 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1810674 00:11:07.174 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1810674 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:07.433 00:11:07.433 real 0m19.377s 00:11:07.433 user 1m16.317s 00:11:07.433 sys 0m1.457s 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.433 ************************************ 00:11:07.433 END TEST nvmf_filesystem_in_capsule 00:11:07.433 ************************************ 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.433 rmmod nvme_tcp 00:11:07.433 rmmod nvme_fabrics 00:11:07.433 rmmod nvme_keyring 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.433 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.968 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.968 00:11:09.968 real 0m49.734s 00:11:09.968 user 2m43.712s 00:11:09.968 sys 0m7.713s 00:11:09.968 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.968 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:09.968 ************************************ 00:11:09.968 END TEST nvmf_filesystem 00:11:09.968 ************************************ 00:11:09.968 17:21:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:09.968 17:21:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.968 17:21:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.968 17:21:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.968 ************************************ 00:11:09.968 START TEST nvmf_target_discovery 00:11:09.968 ************************************ 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:09.968 * Looking for test storage... 
00:11:09.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:09.968 
17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.968 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:09.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.969 --rc genhtml_branch_coverage=1 00:11:09.969 --rc genhtml_function_coverage=1 00:11:09.969 --rc genhtml_legend=1 00:11:09.969 --rc geninfo_all_blocks=1 00:11:09.969 --rc geninfo_unexecuted_blocks=1 00:11:09.969 00:11:09.969 ' 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:09.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.969 --rc genhtml_branch_coverage=1 00:11:09.969 --rc genhtml_function_coverage=1 00:11:09.969 --rc genhtml_legend=1 00:11:09.969 --rc geninfo_all_blocks=1 00:11:09.969 --rc geninfo_unexecuted_blocks=1 00:11:09.969 00:11:09.969 ' 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:09.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.969 --rc genhtml_branch_coverage=1 00:11:09.969 --rc genhtml_function_coverage=1 00:11:09.969 --rc genhtml_legend=1 00:11:09.969 --rc geninfo_all_blocks=1 00:11:09.969 --rc geninfo_unexecuted_blocks=1 00:11:09.969 00:11:09.969 ' 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:09.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.969 --rc genhtml_branch_coverage=1 00:11:09.969 --rc genhtml_function_coverage=1 00:11:09.969 --rc genhtml_legend=1 00:11:09.969 --rc geninfo_all_blocks=1 00:11:09.969 --rc geninfo_unexecuted_blocks=1 00:11:09.969 00:11:09.969 ' 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.969 17:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.969 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.539 17:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.539 17:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
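The vendor:device bucketing traced above (common.sh@315-356) sorts NICs into e810, x722, and mlx families by PCI device ID before picking the test interfaces. A condensed dry-run sketch, using exactly the IDs the trace registers; `classify` is a hypothetical helper name, not part of common.sh:

```shell
# Classify a PCI device ID the way common.sh builds its e810/x722/mlx arrays.
# IDs taken verbatim from the trace above (intel=0x8086, mellanox=0x15b3 buses).
classify() {
  case "$1" in
    0x1592|0x159b) echo e810 ;;
    0x37d2) echo x722 ;;
    0xa2dc|0x1021|0xa2d6|0x101d|0x101b|0x1017|0x1019|0x1015|0x1013) echo mlx ;;
    *) echo unknown ;;
  esac
}

classify 0x159b   # -> e810 (the two 0000:af:00.x ports found above)
```

For this run the script then takes the e810 branch (`[[ e810 == e810 ]]`) and seeds `pci_devs` from the two 0x159b ports bound to the `ice` driver.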
00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:16.539 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.539 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:16.540 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.540 17:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:16.540 Found net devices under 0000:af:00.0: cvl_0_0 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.540 17:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:16.540 Found net devices under 0000:af:00.1: cvl_0_1 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.540 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:16.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:11:16.540 00:11:16.540 --- 10.0.0.2 ping statistics --- 00:11:16.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.540 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:16.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:11:16.540 00:11:16.540 --- 10.0.0.1 ping statistics --- 00:11:16.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.540 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1817503 00:11:16.540 17:21:42 
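The `nvmf_tcp_init` steps traced above (common.sh@250-291) move the target-side port into its own network namespace so initiator (10.0.0.1) and target (10.0.0.2) traffic crosses the physical link. A dry-run sketch of that sequence, assuming the interface names from this run; it defaults to printing the commands, since executing them needs root:

```shell
# Namespace topology from the trace: cvl_0_0 -> target netns, cvl_0_1 -> initiator.
# RUN defaults to "echo" (dry run); set RUN="" and run as root to apply for real.
RUN="${RUN:-echo}"
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

$RUN ip -4 addr flush "$TGT_IF"
$RUN ip -4 addr flush "$INI_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TGT_IF" netns "$NS"
$RUN ip addr add 10.0.0.1/24 dev "$INI_IF"
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
$RUN ip link set "$INI_IF" up
$RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$RUN ip netns exec "$NS" ip link set lo up
# The trace's "ipts" wrapper also tags the rule with an SPDK_NVMF comment.
$RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2
```

The two pings in the log (host to 10.0.0.2, then netns to 10.0.0.1) confirm the topology before `nvmf_tgt` is launched with `ip netns exec cvl_0_0_ns_spdk`.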
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1817503 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1817503 ']' 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.540 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.540 [2024-12-09 17:21:42.245082] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:11:16.540 [2024-12-09 17:21:42.245124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.540 [2024-12-09 17:21:42.320761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.540 [2024-12-09 17:21:42.361780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:16.540 [2024-12-09 17:21:42.361814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.540 [2024-12-09 17:21:42.361821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.540 [2024-12-09 17:21:42.361827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.540 [2024-12-09 17:21:42.361832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.540 [2024-12-09 17:21:42.363308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.540 [2024-12-09 17:21:42.363418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.540 [2024-12-09 17:21:42.363498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.540 [2024-12-09 17:21:42.363499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.799 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.799 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:16.799 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.799 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.799 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.799 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 [2024-12-09 17:21:43.134834] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 Null1 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 
17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 [2024-12-09 17:21:43.187275] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 Null2 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 
17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 Null3 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 Null4 00:11:16.800 
17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
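The discovery.sh loop traced above (@26-@35) builds four null-bdev subsystems, then adds the discovery listener and a 4430 referral. A dry-run sketch of that RPC sequence; `rpc` here is a stub that prints each call, where a live run would invoke `scripts/rpc.py` against the target's socket (an assumption about the harness, not shown in the trace):

```shell
# Dry-run of the per-subsystem RPC sequence from the trace above.
rpc() { printf 'rpc.py %s\n' "$*"; }

cmds="$(
  for i in 1 2 3 4; do
    rpc bdev_null_create "Null$i" 102400 512
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
)"
printf '%s\n' "$cmds"
```

Eighteen calls total: four per subsystem, plus the discovery listener and the referral that shows up as Entry 5 in the discovery log below.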
common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.800 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:17.059 00:11:17.059 Discovery Log Number of Records 6, Generation counter 6 00:11:17.059 =====Discovery Log Entry 0====== 00:11:17.059 trtype: tcp 00:11:17.059 adrfam: ipv4 00:11:17.059 subtype: current discovery subsystem 00:11:17.059 treq: not required 00:11:17.059 portid: 0 00:11:17.059 trsvcid: 4420 00:11:17.059 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:17.059 traddr: 10.0.0.2 00:11:17.059 eflags: explicit discovery connections, duplicate discovery information 00:11:17.059 sectype: none 00:11:17.059 =====Discovery Log Entry 1====== 00:11:17.059 trtype: tcp 00:11:17.059 adrfam: ipv4 00:11:17.059 subtype: nvme subsystem 00:11:17.059 treq: not required 00:11:17.059 portid: 0 00:11:17.059 trsvcid: 4420 00:11:17.059 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:17.059 traddr: 10.0.0.2 00:11:17.059 eflags: none 00:11:17.059 sectype: none 00:11:17.059 =====Discovery Log Entry 2====== 00:11:17.059 
trtype: tcp 00:11:17.059 adrfam: ipv4 00:11:17.059 subtype: nvme subsystem 00:11:17.059 treq: not required 00:11:17.059 portid: 0 00:11:17.059 trsvcid: 4420 00:11:17.059 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:17.059 traddr: 10.0.0.2 00:11:17.059 eflags: none 00:11:17.059 sectype: none 00:11:17.059 =====Discovery Log Entry 3====== 00:11:17.059 trtype: tcp 00:11:17.059 adrfam: ipv4 00:11:17.060 subtype: nvme subsystem 00:11:17.060 treq: not required 00:11:17.060 portid: 0 00:11:17.060 trsvcid: 4420 00:11:17.060 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:17.060 traddr: 10.0.0.2 00:11:17.060 eflags: none 00:11:17.060 sectype: none 00:11:17.060 =====Discovery Log Entry 4====== 00:11:17.060 trtype: tcp 00:11:17.060 adrfam: ipv4 00:11:17.060 subtype: nvme subsystem 00:11:17.060 treq: not required 00:11:17.060 portid: 0 00:11:17.060 trsvcid: 4420 00:11:17.060 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:17.060 traddr: 10.0.0.2 00:11:17.060 eflags: none 00:11:17.060 sectype: none 00:11:17.060 =====Discovery Log Entry 5====== 00:11:17.060 trtype: tcp 00:11:17.060 adrfam: ipv4 00:11:17.060 subtype: discovery subsystem referral 00:11:17.060 treq: not required 00:11:17.060 portid: 0 00:11:17.060 trsvcid: 4430 00:11:17.060 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:17.060 traddr: 10.0.0.2 00:11:17.060 eflags: none 00:11:17.060 sectype: none 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:17.060 Perform nvmf subsystem discovery via RPC 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.060 [ 00:11:17.060 { 00:11:17.060 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:17.060 "subtype": "Discovery", 00:11:17.060 "listen_addresses": [ 00:11:17.060 { 00:11:17.060 "trtype": "TCP", 00:11:17.060 "adrfam": "IPv4", 00:11:17.060 "traddr": "10.0.0.2", 00:11:17.060 "trsvcid": "4420" 00:11:17.060 } 00:11:17.060 ], 00:11:17.060 "allow_any_host": true, 00:11:17.060 "hosts": [] 00:11:17.060 }, 00:11:17.060 { 00:11:17.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.060 "subtype": "NVMe", 00:11:17.060 "listen_addresses": [ 00:11:17.060 { 00:11:17.060 "trtype": "TCP", 00:11:17.060 "adrfam": "IPv4", 00:11:17.060 "traddr": "10.0.0.2", 00:11:17.060 "trsvcid": "4420" 00:11:17.060 } 00:11:17.060 ], 00:11:17.060 "allow_any_host": true, 00:11:17.060 "hosts": [], 00:11:17.060 "serial_number": "SPDK00000000000001", 00:11:17.060 "model_number": "SPDK bdev Controller", 00:11:17.060 "max_namespaces": 32, 00:11:17.060 "min_cntlid": 1, 00:11:17.060 "max_cntlid": 65519, 00:11:17.060 "namespaces": [ 00:11:17.060 { 00:11:17.060 "nsid": 1, 00:11:17.060 "bdev_name": "Null1", 00:11:17.060 "name": "Null1", 00:11:17.060 "nguid": "8DA61B670DB34A1BA2A86B7A5EE2F6C4", 00:11:17.060 "uuid": "8da61b67-0db3-4a1b-a2a8-6b7a5ee2f6c4" 00:11:17.060 } 00:11:17.060 ] 00:11:17.060 }, 00:11:17.060 { 00:11:17.060 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:17.060 "subtype": "NVMe", 00:11:17.060 "listen_addresses": [ 00:11:17.060 { 00:11:17.060 "trtype": "TCP", 00:11:17.060 "adrfam": "IPv4", 00:11:17.060 "traddr": "10.0.0.2", 00:11:17.060 "trsvcid": "4420" 00:11:17.060 } 00:11:17.060 ], 00:11:17.060 "allow_any_host": true, 00:11:17.060 "hosts": [], 00:11:17.060 "serial_number": "SPDK00000000000002", 00:11:17.060 "model_number": "SPDK bdev Controller", 00:11:17.060 "max_namespaces": 32, 00:11:17.060 "min_cntlid": 1, 00:11:17.060 "max_cntlid": 65519, 00:11:17.060 "namespaces": [ 00:11:17.060 { 00:11:17.060 "nsid": 1, 00:11:17.060 "bdev_name": "Null2", 00:11:17.060 "name": "Null2", 00:11:17.060 "nguid": "C7CE215BB43B46DCBF6620B4B8B492E9", 
00:11:17.060 "uuid": "c7ce215b-b43b-46dc-bf66-20b4b8b492e9" 00:11:17.060 } 00:11:17.060 ] 00:11:17.060 }, 00:11:17.060 { 00:11:17.060 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:17.060 "subtype": "NVMe", 00:11:17.060 "listen_addresses": [ 00:11:17.060 { 00:11:17.060 "trtype": "TCP", 00:11:17.060 "adrfam": "IPv4", 00:11:17.060 "traddr": "10.0.0.2", 00:11:17.060 "trsvcid": "4420" 00:11:17.060 } 00:11:17.060 ], 00:11:17.060 "allow_any_host": true, 00:11:17.060 "hosts": [], 00:11:17.060 "serial_number": "SPDK00000000000003", 00:11:17.060 "model_number": "SPDK bdev Controller", 00:11:17.060 "max_namespaces": 32, 00:11:17.060 "min_cntlid": 1, 00:11:17.060 "max_cntlid": 65519, 00:11:17.060 "namespaces": [ 00:11:17.060 { 00:11:17.060 "nsid": 1, 00:11:17.060 "bdev_name": "Null3", 00:11:17.060 "name": "Null3", 00:11:17.060 "nguid": "53E4DA3B40A5432CB8A5604D5892778B", 00:11:17.060 "uuid": "53e4da3b-40a5-432c-b8a5-604d5892778b" 00:11:17.060 } 00:11:17.060 ] 00:11:17.060 }, 00:11:17.060 { 00:11:17.060 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:17.060 "subtype": "NVMe", 00:11:17.060 "listen_addresses": [ 00:11:17.060 { 00:11:17.060 "trtype": "TCP", 00:11:17.060 "adrfam": "IPv4", 00:11:17.060 "traddr": "10.0.0.2", 00:11:17.060 "trsvcid": "4420" 00:11:17.060 } 00:11:17.060 ], 00:11:17.060 "allow_any_host": true, 00:11:17.060 "hosts": [], 00:11:17.060 "serial_number": "SPDK00000000000004", 00:11:17.060 "model_number": "SPDK bdev Controller", 00:11:17.060 "max_namespaces": 32, 00:11:17.060 "min_cntlid": 1, 00:11:17.060 "max_cntlid": 65519, 00:11:17.060 "namespaces": [ 00:11:17.060 { 00:11:17.060 "nsid": 1, 00:11:17.060 "bdev_name": "Null4", 00:11:17.060 "name": "Null4", 00:11:17.060 "nguid": "99CCCB21F3AF4B208B7D90082D610E98", 00:11:17.060 "uuid": "99cccb21-f3af-4b20-8b7d-90082d610e98" 00:11:17.060 } 00:11:17.060 ] 00:11:17.060 } 00:11:17.060 ] 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.060 
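The `nvmf_get_subsystems` reply dumped above lists the discovery subsystem plus four NVMe subsystems (cnode1..cnode4). A minimal offline sketch of pulling the NVMe NQNs out of such a reply — the JSON here is a trimmed, hypothetical copy of the log output (field names match the RPC), so no live target is needed:

```shell
# Extract subsystem NQNs from a saved `rpc.py nvmf_get_subsystems` reply.
# The reply below is a trimmed stand-in mirroring the log output above.
reply='[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe"},
  {"nqn": "nqn.2016-06.io.spdk:cnode2", "subtype": "NVMe"},
  {"nqn": "nqn.2016-06.io.spdk:cnode3", "subtype": "NVMe"},
  {"nqn": "nqn.2016-06.io.spdk:cnode4", "subtype": "NVMe"}
]'
# Keep only the NVMe (non-discovery) entries and pull out the "nqn" values.
nvme_nqns=$(printf '%s\n' "$reply" | grep '"subtype": "NVMe"' \
  | sed -n 's/.*"nqn": "\([^"]*\)".*/\1/p')
echo "$nvme_nqns"
```

In the real test this filtering is done against a running target via `rpc_cmd`; the grep/sed pair here only illustrates the shape of the data.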
17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.060 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.061 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.319 rmmod nvme_tcp 00:11:17.319 rmmod nvme_fabrics 00:11:17.319 rmmod nvme_keyring 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1817503 ']' 00:11:17.319 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1817503 00:11:17.320 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1817503 ']' 00:11:17.320 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1817503 00:11:17.320 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
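The `seq 1 4` loop traced above deletes each subsystem and its backing null bdev, then removes the discovery referral. A hypothetical offline version of that teardown sequence — `rpc_cmd` is stubbed with `echo` here so the loop logic runs without a live nvmf target:

```shell
# Stand-in for scripts/rpc.py so the teardown loop can run offline.
rpc_cmd() { echo "rpc_cmd $*"; }

# Mirror of the teardown driven by `for i in $(seq 1 4)` in discovery.sh:
# delete each subsystem, then its backing null bdev, then the referral.
teardown_log=$(
  for i in $(seq 1 4); do
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      rpc_cmd bdev_null_delete "Null$i"
  done
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
)
echo "$teardown_log"
```

Afterwards the real script confirms via `bdev_get_bdevs` that no bdevs remain, which is why `check_bdevs` is empty in the log above.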
00:11:17.320 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.320 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1817503 00:11:17.320 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.320 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.320 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1817503' 00:11:17.320 killing process with pid 1817503 00:11:17.320 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1817503 00:11:17.320 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1817503 00:11:17.579 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.579 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.579 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.579 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:17.579 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:17.579 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.579 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:17.579 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.579 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.579 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.579 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.579 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.486 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.486 00:11:19.486 real 0m9.940s 00:11:19.486 user 0m8.190s 00:11:19.486 sys 0m4.872s 00:11:19.486 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.486 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.486 ************************************ 00:11:19.486 END TEST nvmf_target_discovery 00:11:19.486 ************************************ 00:11:19.486 17:21:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:19.486 17:21:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.486 17:21:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.486 17:21:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.746 ************************************ 00:11:19.746 START TEST nvmf_referrals 00:11:19.746 ************************************ 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:19.746 * Looking for test storage... 
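The `run_test nvmf_referrals ...` invocation above produces the `START TEST` / `END TEST` banners seen in the log. A simplified, hypothetical sketch of that wrapper (the real one also records timing and bumps test counters):

```shell
# Simplified sketch of autotest's run_test wrapper: banner the test name,
# run the command, and propagate its exit status.
run_test() {
    local name=$1 rc=0
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    if "$@"; then rc=0; else rc=$?; fi
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test demo_passing_test true
```

Propagating the inner command's status is what lets the surrounding `catchError` pipeline stage mark the build failed when any one test fails.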
00:11:19.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:19.746 17:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.746 
--rc genhtml_branch_coverage=1 00:11:19.746 --rc genhtml_function_coverage=1 00:11:19.746 --rc genhtml_legend=1 00:11:19.746 --rc geninfo_all_blocks=1 00:11:19.746 --rc geninfo_unexecuted_blocks=1 00:11:19.746 00:11:19.746 ' 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.746 --rc genhtml_branch_coverage=1 00:11:19.746 --rc genhtml_function_coverage=1 00:11:19.746 --rc genhtml_legend=1 00:11:19.746 --rc geninfo_all_blocks=1 00:11:19.746 --rc geninfo_unexecuted_blocks=1 00:11:19.746 00:11:19.746 ' 00:11:19.746 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.746 --rc genhtml_branch_coverage=1 00:11:19.746 --rc genhtml_function_coverage=1 00:11:19.746 --rc genhtml_legend=1 00:11:19.746 --rc geninfo_all_blocks=1 00:11:19.746 --rc geninfo_unexecuted_blocks=1 00:11:19.746 00:11:19.747 ' 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.747 --rc genhtml_branch_coverage=1 00:11:19.747 --rc genhtml_function_coverage=1 00:11:19.747 --rc genhtml_legend=1 00:11:19.747 --rc geninfo_all_blocks=1 00:11:19.747 --rc geninfo_unexecuted_blocks=1 00:11:19.747 00:11:19.747 ' 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.747 
17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.747 17:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.747 17:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.747 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.324 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:26.325 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:26.325 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:26.325 Found net devices under 0000:af:00.0: cvl_0_0 00:11:26.325 17:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:26.325 Found net devices under 0000:af:00.1: cvl_0_1 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.325 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:11:26.325 00:11:26.325 --- 10.0.0.2 ping statistics --- 00:11:26.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.325 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:11:26.325 00:11:26.325 --- 10.0.0.1 ping statistics --- 00:11:26.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.325 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1821219 00:11:26.325 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.326 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1821219 00:11:26.326 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1821219 ']' 00:11:26.326 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.326 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.326 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.326 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.326 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.326 [2024-12-09 17:21:52.302796] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:11:26.326 [2024-12-09 17:21:52.302850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.326 [2024-12-09 17:21:52.386672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.326 [2024-12-09 17:21:52.428434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.326 [2024-12-09 17:21:52.428470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
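The `nvmf_tcp_init` trace above follows a fixed sequence: flush both interfaces, move the target-side interface into a fresh network namespace, assign 10.0.0.1/24 and 10.0.0.2/24, bring the links up, open TCP port 4420 in iptables, then verify reachability with one ping in each direction. A dry-run sketch of that sequence (interface and namespace names `cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk` are taken from this run; the function name is hypothetical, and commands are only printed by default since executing them requires root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps traced in the log above.
# Set DRY_RUN=0 (as root) to actually execute the commands.
DRY_RUN=${DRY_RUN:-1}

run() {
    # Print the command in dry-run mode, otherwise execute it.
    if (( DRY_RUN )); then echo "+ $*"; else "$@"; fi
}

nvmf_tcp_netns_setup() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                      # initiator -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1  # target -> initiator
}

nvmf_tcp_netns_setup
```

The target application is then started inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`), which is why `NVMF_TARGET_NS_CMD` is prepended to `NVMF_APP` in the trace.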
00:11:26.326 [2024-12-09 17:21:52.428477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.326 [2024-12-09 17:21:52.428482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.326 [2024-12-09 17:21:52.428487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.326 [2024-12-09 17:21:52.429765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.326 [2024-12-09 17:21:52.429875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.326 [2024-12-09 17:21:52.429982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.326 [2024-12-09 17:21:52.429983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.893 [2024-12-09 17:21:53.177317] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.893 [2024-12-09 17:21:53.202313] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:26.893 17:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:26.893 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:26.894 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.153 17:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:27.153 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:27.411 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:27.411 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:27.411 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:27.411 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.411 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.411 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.411 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:27.411 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.411 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.411 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:27.412 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:27.670 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:27.670 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:27.670 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:27.670 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:27.670 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:27.670 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:27.670 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:27.670 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:27.670 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:27.670 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:27.670 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:27.671 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:27.671 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:27.929 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:28.188 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:28.188 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:28.188 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:28.188 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:28.188 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:28.188 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:28.188 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:28.446 17:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:28.446 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:28.705 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:28.705 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:28.705 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:28.705 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:28.705 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:28.705 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:28.705 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.705 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:28.705 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.705 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.705 rmmod nvme_tcp 00:11:28.705 rmmod nvme_fabrics 00:11:28.705 rmmod nvme_keyring 00:11:28.705 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1821219 ']' 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1821219 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1821219 ']' 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1821219 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1821219 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1821219' 00:11:28.964 killing process with pid 1821219 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 1821219 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1821219 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.964 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:31.501 00:11:31.501 real 0m11.483s 00:11:31.501 user 0m14.900s 00:11:31.501 sys 0m5.197s 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.501 
************************************ 00:11:31.501 END TEST nvmf_referrals 00:11:31.501 ************************************ 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:31.501 ************************************ 00:11:31.501 START TEST nvmf_connect_disconnect 00:11:31.501 ************************************ 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:31.501 * Looking for test storage... 
00:11:31.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.501 --rc genhtml_branch_coverage=1 00:11:31.501 --rc genhtml_function_coverage=1 00:11:31.501 --rc genhtml_legend=1 00:11:31.501 --rc geninfo_all_blocks=1 00:11:31.501 --rc geninfo_unexecuted_blocks=1 00:11:31.501 00:11:31.501 ' 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.501 --rc genhtml_branch_coverage=1 00:11:31.501 --rc genhtml_function_coverage=1 00:11:31.501 --rc genhtml_legend=1 00:11:31.501 --rc geninfo_all_blocks=1 00:11:31.501 --rc geninfo_unexecuted_blocks=1 00:11:31.501 00:11:31.501 ' 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.501 --rc genhtml_branch_coverage=1 00:11:31.501 --rc genhtml_function_coverage=1 00:11:31.501 --rc genhtml_legend=1 00:11:31.501 --rc geninfo_all_blocks=1 00:11:31.501 --rc geninfo_unexecuted_blocks=1 00:11:31.501 00:11:31.501 ' 00:11:31.501 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.501 --rc genhtml_branch_coverage=1 00:11:31.501 --rc genhtml_function_coverage=1 00:11:31.501 --rc genhtml_legend=1 00:11:31.502 --rc geninfo_all_blocks=1 00:11:31.502 --rc geninfo_unexecuted_blocks=1 00:11:31.502 00:11:31.502 ' 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:31.502 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.073 17:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:38.073 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:38.074 17:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:38.074 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:38.074 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.074 17:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:38.074 Found net devices under 0000:af:00.0: cvl_0_0 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.074 17:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:38.074 Found net devices under 0000:af:00.1: cvl_0_1 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.074 17:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:38.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:11:38.074 00:11:38.074 --- 10.0.0.2 ping statistics --- 00:11:38.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.074 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:38.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:11:38.074 00:11:38.074 --- 10.0.0.1 ping statistics --- 00:11:38.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.074 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1825309 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1825309 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1825309 ']' 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.074 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.075 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.075 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:38.075 [2024-12-09 17:22:03.785886] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:11:38.075 [2024-12-09 17:22:03.785939] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.075 [2024-12-09 17:22:03.849963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.075 [2024-12-09 17:22:03.890111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:38.075 [2024-12-09 17:22:03.890151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.075 [2024-12-09 17:22:03.890157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.075 [2024-12-09 17:22:03.890164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.075 [2024-12-09 17:22:03.890190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.075 [2024-12-09 17:22:03.891519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.075 [2024-12-09 17:22:03.891651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.075 [2024-12-09 17:22:03.891759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.075 [2024-12-09 17:22:03.891760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.075 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.075 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:38.075 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:38.075 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.075 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:38.075 17:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:38.075 [2024-12-09 17:22:04.036454] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.075 17:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:38.075 [2024-12-09 17:22:04.103058] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:38.075 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:41.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:54.004 17:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.004 rmmod nvme_tcp 00:11:54.004 rmmod nvme_fabrics 00:11:54.004 rmmod nvme_keyring 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1825309 ']' 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1825309 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1825309 ']' 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1825309 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1825309 
00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1825309' 00:11:54.004 killing process with pid 1825309 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1825309 00:11:54.004 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1825309 00:11:54.263 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.263 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.263 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.263 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:54.263 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:54.263 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.263 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.263 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.263 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:54.263 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.263 17:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.263 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.169 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:56.169 00:11:56.169 real 0m25.067s 00:11:56.169 user 1m8.023s 00:11:56.169 sys 0m5.810s 00:11:56.169 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.169 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.169 ************************************ 00:11:56.169 END TEST nvmf_connect_disconnect 00:11:56.169 ************************************ 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.429 ************************************ 00:11:56.429 START TEST nvmf_multitarget 00:11:56.429 ************************************ 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:56.429 * Looking for test storage... 
00:11:56.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:56.429 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.429 --rc genhtml_branch_coverage=1 00:11:56.429 --rc genhtml_function_coverage=1 00:11:56.429 --rc genhtml_legend=1 00:11:56.429 --rc geninfo_all_blocks=1 00:11:56.429 --rc geninfo_unexecuted_blocks=1 00:11:56.429 00:11:56.429 ' 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:56.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.429 --rc genhtml_branch_coverage=1 00:11:56.429 --rc genhtml_function_coverage=1 00:11:56.429 --rc genhtml_legend=1 00:11:56.429 --rc geninfo_all_blocks=1 00:11:56.429 --rc geninfo_unexecuted_blocks=1 00:11:56.429 00:11:56.429 ' 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:56.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.429 --rc genhtml_branch_coverage=1 00:11:56.429 --rc genhtml_function_coverage=1 00:11:56.429 --rc genhtml_legend=1 00:11:56.429 --rc geninfo_all_blocks=1 00:11:56.429 --rc geninfo_unexecuted_blocks=1 00:11:56.429 00:11:56.429 ' 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:56.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.429 --rc genhtml_branch_coverage=1 00:11:56.429 --rc genhtml_function_coverage=1 00:11:56.429 --rc genhtml_legend=1 00:11:56.429 --rc geninfo_all_blocks=1 00:11:56.429 --rc geninfo_unexecuted_blocks=1 00:11:56.429 00:11:56.429 ' 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.429 17:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.429 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.430 17:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:56.430 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.689 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:02.059 17:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:02.059 17:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:02.059 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:02.059 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.059 17:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:02.059 Found net devices under 0000:af:00.0: cvl_0_0 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.059 
17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:02.059 Found net devices under 0000:af:00.1: cvl_0_1 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.059 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:02.059 17:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:02.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:12:02.319 00:12:02.319 --- 10.0.0.2 ping statistics --- 00:12:02.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.319 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:02.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:02.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:12:02.319 00:12:02.319 --- 10.0.0.1 ping statistics --- 00:12:02.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.319 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:02.319 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:02.578 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:02.578 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:02.578 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:02.578 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:02.578 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1831693 00:12:02.578 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 1831693 00:12:02.578 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.578 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1831693 ']' 00:12:02.578 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.578 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.578 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.578 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.578 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:02.578 [2024-12-09 17:22:28.949806] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:12:02.578 [2024-12-09 17:22:28.949851] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.578 [2024-12-09 17:22:29.025486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.578 [2024-12-09 17:22:29.066258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.578 [2024-12-09 17:22:29.066293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:02.578 [2024-12-09 17:22:29.066299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.578 [2024-12-09 17:22:29.066305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.578 [2024-12-09 17:22:29.066310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.578 [2024-12-09 17:22:29.067642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.578 [2024-12-09 17:22:29.067752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.578 [2024-12-09 17:22:29.067859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.578 [2024-12-09 17:22:29.067859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.837 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.837 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:02.837 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:02.837 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:02.837 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:02.837 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.837 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:02.837 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:02.837 17:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:02.837 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:02.837 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:03.094 "nvmf_tgt_1" 00:12:03.094 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:03.094 "nvmf_tgt_2" 00:12:03.094 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:03.094 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:03.094 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:03.094 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:03.352 true 00:12:03.352 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:03.352 true 00:12:03.352 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:03.352 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:03.611 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:03.611 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:03.611 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:03.611 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:03.611 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:03.611 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:03.611 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:03.611 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.611 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:03.611 rmmod nvme_tcp 00:12:03.611 rmmod nvme_fabrics 00:12:03.611 rmmod nvme_keyring 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1831693 ']' 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1831693 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1831693 ']' 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1831693 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1831693 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1831693' 00:12:03.611 killing process with pid 1831693 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1831693 00:12:03.611 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1831693 00:12:03.870 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.870 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.870 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.870 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:03.870 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:03.870 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.870 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.870 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.870 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.870 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.870 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.870 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.776 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.776 00:12:05.776 real 0m9.565s 00:12:05.776 user 0m7.151s 00:12:05.776 sys 0m4.904s 00:12:06.035 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.035 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:06.035 ************************************ 00:12:06.035 END TEST nvmf_multitarget 00:12:06.035 ************************************ 00:12:06.035 17:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:06.035 17:22:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:06.035 17:22:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.035 17:22:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:06.035 ************************************ 00:12:06.036 START TEST nvmf_rpc 00:12:06.036 ************************************ 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:06.036 * Looking for test storage... 
00:12:06.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.036 17:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:06.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.036 --rc genhtml_branch_coverage=1 00:12:06.036 --rc genhtml_function_coverage=1 00:12:06.036 --rc genhtml_legend=1 00:12:06.036 --rc geninfo_all_blocks=1 00:12:06.036 --rc geninfo_unexecuted_blocks=1 
00:12:06.036 00:12:06.036 ' 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:06.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.036 --rc genhtml_branch_coverage=1 00:12:06.036 --rc genhtml_function_coverage=1 00:12:06.036 --rc genhtml_legend=1 00:12:06.036 --rc geninfo_all_blocks=1 00:12:06.036 --rc geninfo_unexecuted_blocks=1 00:12:06.036 00:12:06.036 ' 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:06.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.036 --rc genhtml_branch_coverage=1 00:12:06.036 --rc genhtml_function_coverage=1 00:12:06.036 --rc genhtml_legend=1 00:12:06.036 --rc geninfo_all_blocks=1 00:12:06.036 --rc geninfo_unexecuted_blocks=1 00:12:06.036 00:12:06.036 ' 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:06.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.036 --rc genhtml_branch_coverage=1 00:12:06.036 --rc genhtml_function_coverage=1 00:12:06.036 --rc genhtml_legend=1 00:12:06.036 --rc geninfo_all_blocks=1 00:12:06.036 --rc geninfo_unexecuted_blocks=1 00:12:06.036 00:12:06.036 ' 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.036 17:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.036 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.295 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.295 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.295 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.295 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.295 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:06.295 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:06.295 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.295 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.295 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.295 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.295 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:06.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:06.296 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:06.296 17:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.864 
17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:12:12.864 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:12.864 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:12.864 Found net devices under 0000:af:00.0: cvl_0_0 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.864 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:12.865 Found net devices under 0000:af:00.1: cvl_0_1 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.865 17:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:12.865 
17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:12.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:12:12.865 00:12:12.865 --- 10.0.0.2 ping statistics --- 00:12:12.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.865 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:12.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:12:12.865 00:12:12.865 --- 10.0.0.1 ping statistics --- 00:12:12.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.865 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1835377 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1835377 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1835377 ']' 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.865 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.865 [2024-12-09 17:22:38.623911] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:12:12.865 [2024-12-09 17:22:38.623966] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.865 [2024-12-09 17:22:38.722881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.865 [2024-12-09 17:22:38.761941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.865 [2024-12-09 17:22:38.761979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:12.865 [2024-12-09 17:22:38.761986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.865 [2024-12-09 17:22:38.761992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.865 [2024-12-09 17:22:38.761996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.865 [2024-12-09 17:22:38.763412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.865 [2024-12-09 17:22:38.763521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.865 [2024-12-09 17:22:38.763602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.865 [2024-12-09 17:22:38.763603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.124 17:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:13.124 "tick_rate": 2100000000, 00:12:13.124 "poll_groups": [ 00:12:13.124 { 00:12:13.124 "name": "nvmf_tgt_poll_group_000", 00:12:13.124 "admin_qpairs": 0, 00:12:13.124 "io_qpairs": 0, 00:12:13.124 "current_admin_qpairs": 0, 00:12:13.124 "current_io_qpairs": 0, 00:12:13.124 "pending_bdev_io": 0, 00:12:13.124 "completed_nvme_io": 0, 00:12:13.124 "transports": [] 00:12:13.124 }, 00:12:13.124 { 00:12:13.124 "name": "nvmf_tgt_poll_group_001", 00:12:13.124 "admin_qpairs": 0, 00:12:13.124 "io_qpairs": 0, 00:12:13.124 "current_admin_qpairs": 0, 00:12:13.124 "current_io_qpairs": 0, 00:12:13.124 "pending_bdev_io": 0, 00:12:13.124 "completed_nvme_io": 0, 00:12:13.124 "transports": [] 00:12:13.124 }, 00:12:13.124 { 00:12:13.124 "name": "nvmf_tgt_poll_group_002", 00:12:13.124 "admin_qpairs": 0, 00:12:13.124 "io_qpairs": 0, 00:12:13.124 "current_admin_qpairs": 0, 00:12:13.124 "current_io_qpairs": 0, 00:12:13.124 "pending_bdev_io": 0, 00:12:13.124 "completed_nvme_io": 0, 00:12:13.124 "transports": [] 00:12:13.124 }, 00:12:13.124 { 00:12:13.124 "name": "nvmf_tgt_poll_group_003", 00:12:13.124 "admin_qpairs": 0, 00:12:13.124 "io_qpairs": 0, 00:12:13.124 "current_admin_qpairs": 0, 00:12:13.124 "current_io_qpairs": 0, 00:12:13.124 "pending_bdev_io": 0, 00:12:13.124 "completed_nvme_io": 0, 00:12:13.124 "transports": [] 00:12:13.124 } 00:12:13.124 ] 00:12:13.124 }' 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:13.124 17:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.124 [2024-12-09 17:22:39.611375] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.124 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:13.124 "tick_rate": 2100000000, 00:12:13.124 "poll_groups": [ 00:12:13.124 { 00:12:13.124 "name": "nvmf_tgt_poll_group_000", 00:12:13.124 "admin_qpairs": 0, 00:12:13.124 "io_qpairs": 0, 00:12:13.124 "current_admin_qpairs": 0, 00:12:13.124 "current_io_qpairs": 0, 00:12:13.124 "pending_bdev_io": 0, 00:12:13.124 "completed_nvme_io": 0, 00:12:13.124 "transports": [ 00:12:13.124 { 00:12:13.124 "trtype": "TCP" 00:12:13.124 } 00:12:13.124 ] 00:12:13.124 }, 00:12:13.124 { 00:12:13.124 "name": "nvmf_tgt_poll_group_001", 00:12:13.124 "admin_qpairs": 0, 00:12:13.124 "io_qpairs": 0, 00:12:13.124 "current_admin_qpairs": 0, 00:12:13.124 "current_io_qpairs": 0, 00:12:13.124 "pending_bdev_io": 0, 00:12:13.124 
"completed_nvme_io": 0, 00:12:13.124 "transports": [ 00:12:13.124 { 00:12:13.124 "trtype": "TCP" 00:12:13.124 } 00:12:13.124 ] 00:12:13.124 }, 00:12:13.124 { 00:12:13.124 "name": "nvmf_tgt_poll_group_002", 00:12:13.124 "admin_qpairs": 0, 00:12:13.124 "io_qpairs": 0, 00:12:13.124 "current_admin_qpairs": 0, 00:12:13.124 "current_io_qpairs": 0, 00:12:13.124 "pending_bdev_io": 0, 00:12:13.124 "completed_nvme_io": 0, 00:12:13.124 "transports": [ 00:12:13.124 { 00:12:13.124 "trtype": "TCP" 00:12:13.124 } 00:12:13.124 ] 00:12:13.124 }, 00:12:13.124 { 00:12:13.125 "name": "nvmf_tgt_poll_group_003", 00:12:13.125 "admin_qpairs": 0, 00:12:13.125 "io_qpairs": 0, 00:12:13.125 "current_admin_qpairs": 0, 00:12:13.125 "current_io_qpairs": 0, 00:12:13.125 "pending_bdev_io": 0, 00:12:13.125 "completed_nvme_io": 0, 00:12:13.125 "transports": [ 00:12:13.125 { 00:12:13.125 "trtype": "TCP" 00:12:13.125 } 00:12:13.125 ] 00:12:13.125 } 00:12:13.125 ] 00:12:13.125 }' 00:12:13.125 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:13.125 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:13.125 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:13.125 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:13.384 
17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 Malloc1 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:13.384 17:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 [2024-12-09 17:22:39.799333] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:13.384 [2024-12-09 17:22:39.833990] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:12:13.384 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:13.384 could not add new controller: failed to write to nvme-fabrics device 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.384 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.759 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.759 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:14.759 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.759 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:14.759 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
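The `waitforserial` trace above polls `lsblk` until a block device carrying the subsystem's serial appears. A simplified standalone sketch of that loop; `count_devices` is a hypothetical stub standing in for `lsblk -l -o NAME,SERIAL | grep -c "$serial"`, so this runs without a real NVMe fabric:

```shell
# Sketch of the waitforserial polling loop: retry up to 15 times, sleeping
# 2s between attempts, until the expected device count shows up.
count_devices() { echo 1; }   # stub: pretend one matching device exists

waitforserial() {
    serial=$1
    expected=${2:-1}
    i=0
    while [ "$i" -le 15 ]; do
        found=$(count_devices "$serial")
        [ "$found" -eq "$expected" ] && return 0
        i=$((i + 1))
        sleep 2
    done
    return 1   # device never appeared within ~30s
}

waitforserial SPDKISFASTANDAWESOME && echo "serial visible"
```

With the stub returning 1 the loop succeeds on the first pass, matching the `return 0` seen in the trace.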
00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:16.660 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.661 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.661 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.661 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.661 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:16.661 17:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.661 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:16.661 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.661 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:16.661 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.661 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:16.661 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.661 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:16.661 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:16.661 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.919 [2024-12-09 17:22:43.220293] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:12:16.919 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:16.919 could not add new controller: failed to write to nvme-fabrics device 00:12:16.919 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:16.919 
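The failed `nvme connect` above is wrapped in `NOT`, the autotest_common.sh helper that asserts a command fails: the ctrlr.c access-denied error and the `es=1` lines are the expected path. A simplified sketch of the inversion logic (the real helper also routes through `valid_exec_arg` and tracks `es`, which is omitted here):

```shell
# Simplified sketch of the NOT helper: run a command that is *expected*
# to fail and invert its exit status.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded: the test should fail
    else
        return 0   # command failed as expected (the es=1 path in the trace)
    fi
}

# `false` stands in for the nvme connect attempt rejected by the host ACL.
NOT false && echo "connect was rejected, as the test expects"
```

This is why removing the host NQN with `nvmf_subsystem_remove_host` and then running `NOT nvme connect ...` counts as a pass: the connect must hit "does not allow host" for the assertion to hold.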
17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.919 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.919 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.919 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:16.919 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.919 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.919 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.919 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.855 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.855 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:17.855 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.855 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:17.855 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:20.394 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:20.394 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:20.394 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:20.394 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:20.394 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.394 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:20.395 17:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.395 [2024-12-09 17:22:46.531240] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.395 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.330 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.330 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:21.330 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.330 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:21.330 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.231 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.490 
17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.490 [2024-12-09 17:22:49.798115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.490 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.866 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.866 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:24.866 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.866 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:24.866 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.769 17:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.769 [2024-12-09 17:22:53.141296] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.769 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.145 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.145 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:28.145 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.145 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:28.145 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:30.052 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:30.052 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:30.052 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.052 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.053 [2024-12-09 17:22:56.536734] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.053 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.429 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.429 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:31.429 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:31.429 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:31.429 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.331 [2024-12-09 17:22:59.788228] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.331 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.707 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.707 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:34.707 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.707 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:34.707 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:36.608 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:36.608 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:36.608 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.608 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:36.608 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.608 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:36.608 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.608 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.608 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:36.608 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:36.608 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.608 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:36.608 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.608 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:36.608 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.608 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.608 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.609 [2024-12-09 17:23:03.096794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.609 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.609 [2024-12-09 17:23:03.144916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 
17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:36.868 [2024-12-09 17:23:03.193038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 [2024-12-09 17:23:03.241229] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
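The `waitforserial` / `waitforserial_disconnect` helpers traced throughout this section (`common/autotest_common.sh@1202`–`@1235`) poll `lsblk -l -o NAME,SERIAL` until the expected namespace serial appears (or disappears), sleeping between attempts with a bounded retry count. A self-contained sketch of that polling pattern, with `lsblk` stubbed so it runs without a connected NVMe device:

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial polling loop from common/autotest_common.sh.
# lsblk is stubbed here to return one matching device immediately.
lsblk() { printf 'NAME SERIAL\nnvme0n1 SPDKISFASTANDAWESOME\n'; }

waitforserial() {
  local serial=$1 i=0 nvme_devices=0
  while (( i++ <= 15 )); do
    # Count block devices whose SERIAL column matches.
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( nvme_devices >= 1 )) && return 0
    sleep 2
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME && echo "serial found"
```

The bounded `i++ <= 15` retry with `sleep 2` mirrors the log's `@1209`/`@1210` lines; the disconnect variant inverts the check, returning once `grep -q -w` no longer matches.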
00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 [2024-12-09 17:23:03.289390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:36.869 "tick_rate": 2100000000, 00:12:36.869 "poll_groups": [ 00:12:36.869 { 00:12:36.869 "name": "nvmf_tgt_poll_group_000", 00:12:36.869 "admin_qpairs": 2, 00:12:36.869 "io_qpairs": 168, 00:12:36.869 "current_admin_qpairs": 0, 00:12:36.869 "current_io_qpairs": 0, 00:12:36.869 "pending_bdev_io": 0, 00:12:36.869 "completed_nvme_io": 270, 00:12:36.869 "transports": [ 00:12:36.869 { 00:12:36.869 "trtype": "TCP" 00:12:36.869 } 00:12:36.869 ] 00:12:36.869 }, 00:12:36.869 { 00:12:36.869 "name": "nvmf_tgt_poll_group_001", 00:12:36.869 "admin_qpairs": 2, 00:12:36.869 "io_qpairs": 168, 00:12:36.869 "current_admin_qpairs": 0, 00:12:36.869 "current_io_qpairs": 0, 00:12:36.869 "pending_bdev_io": 0, 00:12:36.869 "completed_nvme_io": 218, 00:12:36.869 "transports": [ 00:12:36.869 { 00:12:36.869 "trtype": "TCP" 00:12:36.869 } 00:12:36.869 ] 00:12:36.869 }, 00:12:36.869 { 00:12:36.869 "name": "nvmf_tgt_poll_group_002", 00:12:36.869 "admin_qpairs": 1, 00:12:36.869 "io_qpairs": 168, 00:12:36.869 "current_admin_qpairs": 0, 00:12:36.869 "current_io_qpairs": 0, 00:12:36.869 "pending_bdev_io": 0, 
00:12:36.869 "completed_nvme_io": 267, 00:12:36.869 "transports": [ 00:12:36.869 { 00:12:36.869 "trtype": "TCP" 00:12:36.869 } 00:12:36.869 ] 00:12:36.869 }, 00:12:36.869 { 00:12:36.869 "name": "nvmf_tgt_poll_group_003", 00:12:36.869 "admin_qpairs": 2, 00:12:36.869 "io_qpairs": 168, 00:12:36.869 "current_admin_qpairs": 0, 00:12:36.869 "current_io_qpairs": 0, 00:12:36.869 "pending_bdev_io": 0, 00:12:36.869 "completed_nvme_io": 267, 00:12:36.869 "transports": [ 00:12:36.869 { 00:12:36.869 "trtype": "TCP" 00:12:36.869 } 00:12:36.869 ] 00:12:36.869 } 00:12:36.869 ] 00:12:36.869 }' 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:36.869 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.128 rmmod nvme_tcp 00:12:37.128 rmmod nvme_fabrics 00:12:37.128 rmmod nvme_keyring 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1835377 ']' 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1835377 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1835377 ']' 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1835377 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1835377 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1835377' 00:12:37.128 killing process with pid 1835377 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1835377 00:12:37.128 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1835377 00:12:37.387 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.387 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.387 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.387 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:37.387 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:37.387 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.387 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.387 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.387 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.387 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.387 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.387 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.290 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.290 00:12:39.290 real 0m33.422s 00:12:39.290 user 1m41.338s 00:12:39.290 sys 0m6.559s 00:12:39.290 17:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.290 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.290 ************************************ 00:12:39.290 END TEST nvmf_rpc 00:12:39.290 ************************************ 00:12:39.550 17:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:39.550 17:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.550 17:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.550 17:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.550 ************************************ 00:12:39.550 START TEST nvmf_invalid 00:12:39.550 ************************************ 00:12:39.550 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:39.550 * Looking for test storage... 
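Earlier in the run, the `jsum` helper (`target/rpc.sh@19`–`@20`) summed one numeric field across all poll groups in the `nvmf_get_stats` JSON, feeding checks like `(( 672 > 0 ))`. A dependency-free sketch of that aggregation, using `grep`/`awk` in place of `jq` and a shortened inline stats blob as the assumed input:

```shell
#!/usr/bin/env bash
# Sketch of the jsum aggregation: sum .poll_groups[].io_qpairs from
# nvmf_get_stats output. jq is replaced by grep -o so the sketch has
# no dependency beyond coreutils; the stats blob is abbreviated.
stats='{"poll_groups":[{"io_qpairs":168},{"io_qpairs":168},{"io_qpairs":168},{"io_qpairs":168}]}'
total=$(printf '%s' "$stats" | grep -o '"io_qpairs":[0-9]*' | awk -F: '{s+=$2} END {print s}')
echo "$total"   # 672, matching the (( 672 > 0 )) check in the log
```

The real helper pipes `jq '.poll_groups[].io_qpairs'` into the same `awk '{s+=$1} END {print s}'` reducer, which is more robust to whitespace and nesting than the grep approximation here.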
00:12:39.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.550 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:39.550 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:39.550 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.550 --rc genhtml_branch_coverage=1 00:12:39.550 --rc 
genhtml_function_coverage=1 00:12:39.550 --rc genhtml_legend=1 00:12:39.550 --rc geninfo_all_blocks=1 00:12:39.550 --rc geninfo_unexecuted_blocks=1 00:12:39.550 00:12:39.550 ' 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.550 --rc genhtml_branch_coverage=1 00:12:39.550 --rc genhtml_function_coverage=1 00:12:39.550 --rc genhtml_legend=1 00:12:39.550 --rc geninfo_all_blocks=1 00:12:39.550 --rc geninfo_unexecuted_blocks=1 00:12:39.550 00:12:39.550 ' 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.550 --rc genhtml_branch_coverage=1 00:12:39.550 --rc genhtml_function_coverage=1 00:12:39.550 --rc genhtml_legend=1 00:12:39.550 --rc geninfo_all_blocks=1 00:12:39.550 --rc geninfo_unexecuted_blocks=1 00:12:39.550 00:12:39.550 ' 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.550 --rc genhtml_branch_coverage=1 00:12:39.550 --rc genhtml_function_coverage=1 00:12:39.550 --rc genhtml_legend=1 00:12:39.550 --rc geninfo_all_blocks=1 00:12:39.550 --rc geninfo_unexecuted_blocks=1 00:12:39.550 00:12:39.550 ' 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.550 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.551 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.551 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.551 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.551 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.810 17:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.810 17:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.810 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:46.379 17:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.379 17:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:46.379 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:46.379 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:46.379 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:46.380 Found net devices under 0000:af:00.0: cvl_0_0 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:46.380 Found net devices under 0000:af:00.1: cvl_0_1 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.380 17:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.380 17:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:46.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:12:46.380 00:12:46.380 --- 10.0.0.2 ping statistics --- 00:12:46.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.380 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:12:46.380 00:12:46.380 --- 10.0.0.1 ping statistics --- 00:12:46.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.380 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:46.380 17:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1843103 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1843103 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1843103 ']' 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.380 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:46.380 [2024-12-09 17:23:12.024641] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:12:46.380 [2024-12-09 17:23:12.024688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.380 [2024-12-09 17:23:12.100129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.380 [2024-12-09 17:23:12.139575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.380 [2024-12-09 17:23:12.139613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.380 [2024-12-09 17:23:12.139621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.380 [2024-12-09 17:23:12.139627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.380 [2024-12-09 17:23:12.139632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:46.380 [2024-12-09 17:23:12.141090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:46.380 [2024-12-09 17:23:12.141222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:46.380 [2024-12-09 17:23:12.141264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:46.380 [2024-12-09 17:23:12.141265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:46.380 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:46.380 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:12:46.380 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:46.380 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:46.380 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:46.380 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:46.380 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:12:46.380 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23612
00:12:46.380 [2024-12-09 17:23:12.451239] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:12:46.380 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:12:46.380 {
00:12:46.380 "nqn": "nqn.2016-06.io.spdk:cnode23612",
00:12:46.380 "tgt_name": "foobar",
00:12:46.380 "method": "nvmf_create_subsystem",
00:12:46.380 "req_id": 1
00:12:46.380 }
00:12:46.380 Got JSON-RPC error response
00:12:46.380 response:
00:12:46.380 {
00:12:46.380 "code": -32603,
00:12:46.380 "message": "Unable to find target foobar"
00:12:46.380 }'
00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:12:46.381 {
00:12:46.381 "nqn": "nqn.2016-06.io.spdk:cnode23612",
00:12:46.381 "tgt_name": "foobar",
00:12:46.381 "method": "nvmf_create_subsystem",
00:12:46.381 "req_id": 1
00:12:46.381 }
00:12:46.381 Got JSON-RPC error response
00:12:46.381 response:
00:12:46.381 {
00:12:46.381 "code": -32603,
00:12:46.381 "message": "Unable to find target foobar"
00:12:46.381 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8267
00:12:46.381 [2024-12-09 17:23:12.659957] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8267: invalid serial number 'SPDKISFASTANDAWESOME'
00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:12:46.381 {
00:12:46.381 "nqn": "nqn.2016-06.io.spdk:cnode8267",
00:12:46.381 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:12:46.381 "method": "nvmf_create_subsystem",
00:12:46.381 "req_id": 1
00:12:46.381 }
00:12:46.381 Got JSON-RPC error response
00:12:46.381 response:
00:12:46.381 {
00:12:46.381 "code": -32602,
00:12:46.381 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:12:46.381 }'
00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:12:46.381 {
00:12:46.381 "nqn": "nqn.2016-06.io.spdk:cnode8267",
00:12:46.381 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:12:46.381 "method": "nvmf_create_subsystem",
00:12:46.381 "req_id": 1
00:12:46.381 }
00:12:46.381 Got JSON-RPC error response
00:12:46.381 response:
00:12:46.381 {
00:12:46.381 "code": -32602,
00:12:46.381 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:12:46.381 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17766
00:12:46.381 [2024-12-09 17:23:12.856590] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17766: invalid model number 'SPDK_Controller'
00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:12:46.381 {
00:12:46.381 "nqn": "nqn.2016-06.io.spdk:cnode17766",
00:12:46.381 "model_number": "SPDK_Controller\u001f",
00:12:46.381 "method": "nvmf_create_subsystem",
00:12:46.381 "req_id": 1
00:12:46.381 }
00:12:46.381 Got JSON-RPC error response
00:12:46.381 response:
00:12:46.381 {
00:12:46.381 "code": -32602,
00:12:46.381 "message": "Invalid MN SPDK_Controller\u001f"
00:12:46.381 }'
00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:12:46.381 {
00:12:46.381 "nqn": "nqn.2016-06.io.spdk:cnode17766",
00:12:46.381 "model_number": "SPDK_Controller\u001f",
00:12:46.381 "method": "nvmf_create_subsystem",
00:12:46.381 "req_id": 1
00:12:46.381 }
00:12:46.381 Got JSON-RPC error response
00:12:46.381 response:
00:12:46.381 {
00:12:46.381 "code": -32602,
00:12:46.381 "message": "Invalid MN SPDK_Controller\u001f"
00:12:46.381 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
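The three failures above show the target's input validation: an unknown target name (`foobar`) returns JSON-RPC error -32603, while a serial number or model number carrying the non-printable byte appended via `echo -e '\x1f'` is rejected with -32602 ("Invalid SN" / "Invalid MN"). A minimal local sketch of that printable-ASCII check; this helper is illustrative only, not SPDK's actual target-side code:

```shell
# Illustrative sketch: mirror the serial-number validation implied by the
# "Invalid SN" errors above. SPDK performs this check inside the target,
# not in the test script; sn_is_valid is a hypothetical helper name.
sn_is_valid() {
    local sn=$1
    # Reject any string containing a non-printable character,
    # such as the \x1f (unit separator) byte the test appends.
    [[ $sn != *[![:print:]]* ]]
}

if sn_is_valid $'SPDKISFASTANDAWESOME\037'; then
    echo "accepted"
else
    echo "rejected"   # this branch is taken: \x1f is not printable
fi
```

The test script then only has to assert on the error text, which is why the trace compares `$out` against the glob `*\I\n\v\a\l\i\d\ \S\N*`.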
00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.381 17:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:46.381 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:46.640 17:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:46.640 17:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:46.640 17:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:46.640 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:46.640 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:46.640 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.640 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.640 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.641 17:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ o == \- ]]
00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'oe"ThplMY]PCXIyoeUv#5'
00:12:46.641 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'oe"ThplMY]PCXIyoeUv#5' nqn.2016-06.io.spdk:cnode2896
00:12:46.900 [2024-12-09 17:23:13.209780] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2896: invalid serial number 'oe"ThplMY]PCXIyoeUv#5'
00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:12:46.900 {
00:12:46.900 "nqn": "nqn.2016-06.io.spdk:cnode2896",
00:12:46.900 "serial_number": "oe\"ThplMY]PCXIyoeUv#5",
00:12:46.900 "method": "nvmf_create_subsystem",
00:12:46.900 "req_id": 1
00:12:46.900 }
00:12:46.900 Got JSON-RPC error response
00:12:46.900 response:
00:12:46.900 {
00:12:46.900 "code": -32602,
00:12:46.900 "message": "Invalid SN oe\"ThplMY]PCXIyoeUv#5"
00:12:46.900 }'
00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:12:46.900 {
00:12:46.900 "nqn": "nqn.2016-06.io.spdk:cnode2896",
00:12:46.900 "serial_number": "oe\"ThplMY]PCXIyoeUv#5",
00:12:46.900 "method": "nvmf_create_subsystem",
00:12:46.900 "req_id": 1
00:12:46.900 }
00:12:46.900 Got JSON-RPC error response
00:12:46.900 response:
00:12:46.900 {
00:12:46.900 "code": -32602,
00:12:46.900 "message": "Invalid SN oe\"ThplMY]PCXIyoeUv#5"
00:12:46.900 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:12:46.900 17:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.900 17:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.900 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:46.901 17:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:46.901 17:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:46.901 17:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:46.901 17:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:46.901 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:46.901 17:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:46.902 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.902 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.902 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:46.902 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:46.902 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:46.902 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.902 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:47.160 17:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.160 17:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.160 17:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ K == \- ]] 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Kux'\''2}an>ge]NgGtL,.wFs7~=;Jz_!C-q]2CiAB-' 00:12:47.160 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Kux'\''2}an>ge]NgGtL,.wFs7~=;Jz_!C-q]2CiAB-' nqn.2016-06.io.spdk:cnode23771 00:12:47.161 [2024-12-09 17:23:13.675283] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23771: invalid model number 'Kux'2}an>ge]NgGtL,.wFs7~=;Jz_!C-q]2CiAB-' 00:12:47.419 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:47.419 { 00:12:47.419 "nqn": "nqn.2016-06.io.spdk:cnode23771", 00:12:47.419 "model_number": "Kux'\''2}an>ge]NgGtL,.wFs7~=;Jz_!C-q\u007f]2CiAB-", 00:12:47.419 "method": "nvmf_create_subsystem", 00:12:47.419 "req_id": 1 00:12:47.419 } 00:12:47.419 Got JSON-RPC error response 00:12:47.419 response: 00:12:47.419 { 00:12:47.419 "code": -32602, 00:12:47.419 "message": "Invalid MN Kux'\''2}an>ge]NgGtL,.wFs7~=;Jz_!C-q\u007f]2CiAB-" 00:12:47.419 }' 00:12:47.419 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:47.419 { 00:12:47.419 
"nqn": "nqn.2016-06.io.spdk:cnode23771", 00:12:47.419 "model_number": "Kux'2}an>ge]NgGtL,.wFs7~=;Jz_!C-q\u007f]2CiAB-", 00:12:47.419 "method": "nvmf_create_subsystem", 00:12:47.419 "req_id": 1 00:12:47.419 } 00:12:47.419 Got JSON-RPC error response 00:12:47.419 response: 00:12:47.419 { 00:12:47.419 "code": -32602, 00:12:47.419 "message": "Invalid MN Kux'2}an>ge]NgGtL,.wFs7~=;Jz_!C-q\u007f]2CiAB-" 00:12:47.419 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:47.419 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:47.419 [2024-12-09 17:23:13.872012] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.419 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:47.677 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:47.677 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:47.677 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:47.677 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:47.677 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:47.935 [2024-12-09 17:23:14.289383] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:47.935 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:47.935 { 00:12:47.935 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:47.935 "listen_address": { 00:12:47.935 "trtype": "tcp", 00:12:47.935 "traddr": "", 00:12:47.935 
"trsvcid": "4421" 00:12:47.935 }, 00:12:47.935 "method": "nvmf_subsystem_remove_listener", 00:12:47.935 "req_id": 1 00:12:47.935 } 00:12:47.935 Got JSON-RPC error response 00:12:47.935 response: 00:12:47.935 { 00:12:47.935 "code": -32602, 00:12:47.935 "message": "Invalid parameters" 00:12:47.935 }' 00:12:47.935 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:47.935 { 00:12:47.935 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:47.935 "listen_address": { 00:12:47.935 "trtype": "tcp", 00:12:47.935 "traddr": "", 00:12:47.935 "trsvcid": "4421" 00:12:47.935 }, 00:12:47.935 "method": "nvmf_subsystem_remove_listener", 00:12:47.935 "req_id": 1 00:12:47.935 } 00:12:47.935 Got JSON-RPC error response 00:12:47.935 response: 00:12:47.935 { 00:12:47.935 "code": -32602, 00:12:47.935 "message": "Invalid parameters" 00:12:47.935 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:47.935 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12104 -i 0 00:12:48.194 [2024-12-09 17:23:14.506067] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12104: invalid cntlid range [0-65519] 00:12:48.194 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:48.194 { 00:12:48.194 "nqn": "nqn.2016-06.io.spdk:cnode12104", 00:12:48.194 "min_cntlid": 0, 00:12:48.194 "method": "nvmf_create_subsystem", 00:12:48.194 "req_id": 1 00:12:48.194 } 00:12:48.194 Got JSON-RPC error response 00:12:48.194 response: 00:12:48.194 { 00:12:48.194 "code": -32602, 00:12:48.194 "message": "Invalid cntlid range [0-65519]" 00:12:48.194 }' 00:12:48.194 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:48.194 { 00:12:48.194 "nqn": "nqn.2016-06.io.spdk:cnode12104", 00:12:48.194 "min_cntlid": 0, 00:12:48.194 
"method": "nvmf_create_subsystem", 00:12:48.194 "req_id": 1 00:12:48.194 } 00:12:48.194 Got JSON-RPC error response 00:12:48.194 response: 00:12:48.194 { 00:12:48.194 "code": -32602, 00:12:48.194 "message": "Invalid cntlid range [0-65519]" 00:12:48.194 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.194 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20684 -i 65520 00:12:48.194 [2024-12-09 17:23:14.722775] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20684: invalid cntlid range [65520-65519] 00:12:48.452 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:48.452 { 00:12:48.452 "nqn": "nqn.2016-06.io.spdk:cnode20684", 00:12:48.452 "min_cntlid": 65520, 00:12:48.452 "method": "nvmf_create_subsystem", 00:12:48.452 "req_id": 1 00:12:48.452 } 00:12:48.452 Got JSON-RPC error response 00:12:48.452 response: 00:12:48.452 { 00:12:48.452 "code": -32602, 00:12:48.452 "message": "Invalid cntlid range [65520-65519]" 00:12:48.452 }' 00:12:48.452 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:48.452 { 00:12:48.452 "nqn": "nqn.2016-06.io.spdk:cnode20684", 00:12:48.452 "min_cntlid": 65520, 00:12:48.452 "method": "nvmf_create_subsystem", 00:12:48.452 "req_id": 1 00:12:48.452 } 00:12:48.452 Got JSON-RPC error response 00:12:48.452 response: 00:12:48.452 { 00:12:48.452 "code": -32602, 00:12:48.452 "message": "Invalid cntlid range [65520-65519]" 00:12:48.452 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.452 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2649 -I 0 00:12:48.452 [2024-12-09 17:23:14.923464] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2649: invalid cntlid range [1-0] 00:12:48.452 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:48.452 { 00:12:48.452 "nqn": "nqn.2016-06.io.spdk:cnode2649", 00:12:48.452 "max_cntlid": 0, 00:12:48.452 "method": "nvmf_create_subsystem", 00:12:48.452 "req_id": 1 00:12:48.452 } 00:12:48.452 Got JSON-RPC error response 00:12:48.452 response: 00:12:48.452 { 00:12:48.452 "code": -32602, 00:12:48.452 "message": "Invalid cntlid range [1-0]" 00:12:48.452 }' 00:12:48.452 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:48.452 { 00:12:48.452 "nqn": "nqn.2016-06.io.spdk:cnode2649", 00:12:48.452 "max_cntlid": 0, 00:12:48.452 "method": "nvmf_create_subsystem", 00:12:48.452 "req_id": 1 00:12:48.452 } 00:12:48.452 Got JSON-RPC error response 00:12:48.452 response: 00:12:48.452 { 00:12:48.452 "code": -32602, 00:12:48.452 "message": "Invalid cntlid range [1-0]" 00:12:48.452 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.452 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode780 -I 65520 00:12:48.715 [2024-12-09 17:23:15.124135] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode780: invalid cntlid range [1-65520] 00:12:48.715 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:48.715 { 00:12:48.715 "nqn": "nqn.2016-06.io.spdk:cnode780", 00:12:48.715 "max_cntlid": 65520, 00:12:48.715 "method": "nvmf_create_subsystem", 00:12:48.715 "req_id": 1 00:12:48.715 } 00:12:48.715 Got JSON-RPC error response 00:12:48.715 response: 00:12:48.715 { 00:12:48.715 "code": -32602, 00:12:48.715 "message": "Invalid cntlid range [1-65520]" 00:12:48.715 }' 00:12:48.715 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:12:48.715 { 00:12:48.715 "nqn": "nqn.2016-06.io.spdk:cnode780", 00:12:48.715 "max_cntlid": 65520, 00:12:48.715 "method": "nvmf_create_subsystem", 00:12:48.715 "req_id": 1 00:12:48.715 } 00:12:48.715 Got JSON-RPC error response 00:12:48.715 response: 00:12:48.715 { 00:12:48.715 "code": -32602, 00:12:48.715 "message": "Invalid cntlid range [1-65520]" 00:12:48.715 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.715 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25699 -i 6 -I 5 00:12:48.974 [2024-12-09 17:23:15.316828] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25699: invalid cntlid range [6-5] 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:48.974 { 00:12:48.974 "nqn": "nqn.2016-06.io.spdk:cnode25699", 00:12:48.974 "min_cntlid": 6, 00:12:48.974 "max_cntlid": 5, 00:12:48.974 "method": "nvmf_create_subsystem", 00:12:48.974 "req_id": 1 00:12:48.974 } 00:12:48.974 Got JSON-RPC error response 00:12:48.974 response: 00:12:48.974 { 00:12:48.974 "code": -32602, 00:12:48.974 "message": "Invalid cntlid range [6-5]" 00:12:48.974 }' 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:48.974 { 00:12:48.974 "nqn": "nqn.2016-06.io.spdk:cnode25699", 00:12:48.974 "min_cntlid": 6, 00:12:48.974 "max_cntlid": 5, 00:12:48.974 "method": "nvmf_create_subsystem", 00:12:48.974 "req_id": 1 00:12:48.974 } 00:12:48.974 Got JSON-RPC error response 00:12:48.974 response: 00:12:48.974 { 00:12:48.974 "code": -32602, 00:12:48.974 "message": "Invalid cntlid range [6-5]" 00:12:48.974 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:48.974 { 00:12:48.974 "name": "foobar", 00:12:48.974 "method": "nvmf_delete_target", 00:12:48.974 "req_id": 1 00:12:48.974 } 00:12:48.974 Got JSON-RPC error response 00:12:48.974 response: 00:12:48.974 { 00:12:48.974 "code": -32602, 00:12:48.974 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:48.974 }' 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:48.974 { 00:12:48.974 "name": "foobar", 00:12:48.974 "method": "nvmf_delete_target", 00:12:48.974 "req_id": 1 00:12:48.974 } 00:12:48.974 Got JSON-RPC error response 00:12:48.974 response: 00:12:48.974 { 00:12:48.974 "code": -32602, 00:12:48.974 "message": "The specified target doesn't exist, cannot delete it." 00:12:48.974 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.974 rmmod nvme_tcp 00:12:48.974 
rmmod nvme_fabrics 00:12:48.974 rmmod nvme_keyring 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1843103 ']' 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1843103 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1843103 ']' 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1843103 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.974 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1843103 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1843103' 00:12:49.233 killing process with pid 1843103 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1843103 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1843103 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:49.233 17:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.233 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.766 17:23:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:51.766 00:12:51.766 real 0m11.891s 00:12:51.766 user 0m18.457s 00:12:51.766 sys 0m5.340s 00:12:51.766 17:23:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.766 17:23:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:51.766 ************************************ 00:12:51.766 END TEST nvmf_invalid 00:12:51.766 ************************************ 00:12:51.766 17:23:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:12:51.766 17:23:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:51.766 17:23:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.766 17:23:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:51.766 ************************************ 00:12:51.766 START TEST nvmf_connect_stress 00:12:51.766 ************************************ 00:12:51.766 17:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:51.766 * Looking for test storage... 00:12:51.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.766 17:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:51.766 17:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:51.766 17:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:51.766 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:51.766 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:51.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.767 --rc genhtml_branch_coverage=1 00:12:51.767 --rc genhtml_function_coverage=1 00:12:51.767 --rc genhtml_legend=1 00:12:51.767 --rc 
geninfo_all_blocks=1 00:12:51.767 --rc geninfo_unexecuted_blocks=1 00:12:51.767 00:12:51.767 ' 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:51.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.767 --rc genhtml_branch_coverage=1 00:12:51.767 --rc genhtml_function_coverage=1 00:12:51.767 --rc genhtml_legend=1 00:12:51.767 --rc geninfo_all_blocks=1 00:12:51.767 --rc geninfo_unexecuted_blocks=1 00:12:51.767 00:12:51.767 ' 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:51.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.767 --rc genhtml_branch_coverage=1 00:12:51.767 --rc genhtml_function_coverage=1 00:12:51.767 --rc genhtml_legend=1 00:12:51.767 --rc geninfo_all_blocks=1 00:12:51.767 --rc geninfo_unexecuted_blocks=1 00:12:51.767 00:12:51.767 ' 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:51.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.767 --rc genhtml_branch_coverage=1 00:12:51.767 --rc genhtml_function_coverage=1 00:12:51.767 --rc genhtml_legend=1 00:12:51.767 --rc geninfo_all_blocks=1 00:12:51.767 --rc geninfo_unexecuted_blocks=1 00:12:51.767 00:12:51.767 ' 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.767 
17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:51.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.767 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.768 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:51.768 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:12:51.768 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:51.768 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:58.337 17:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:58.337 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:58.337 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.337 17:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:58.337 Found net devices under 0000:af:00.0: cvl_0_0 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:58.337 Found net devices under 0000:af:00.1: cvl_0_1 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:58.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:58.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:12:58.337 00:12:58.337 --- 10.0.0.2 ping statistics --- 00:12:58.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.337 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:12:58.337 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:58.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:58.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:12:58.338 00:12:58.338 --- 10.0.0.1 ping statistics --- 00:12:58.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.338 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:12:58.338 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.338 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:58.338 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:58.338 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.338 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:58.338 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:58.338 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.338 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:58.338 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:58.338 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:58.338 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:58.338 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:58.338 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.338 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1847201 00:12:58.338 17:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:58.338 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1847201 00:12:58.338 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1847201 ']' 00:12:58.338 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.338 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.338 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.338 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.338 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.338 [2024-12-09 17:23:24.062616] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:12:58.338 [2024-12-09 17:23:24.062665] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.338 [2024-12-09 17:23:24.142942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:58.338 [2024-12-09 17:23:24.185391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:58.338 [2024-12-09 17:23:24.185427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.338 [2024-12-09 17:23:24.185434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.338 [2024-12-09 17:23:24.185441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.338 [2024-12-09 17:23:24.185447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.338 [2024-12-09 17:23:24.186673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.338 [2024-12-09 17:23:24.186782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.338 [2024-12-09 17:23:24.186784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:58.601 [2024-12-09 17:23:24.937054] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.601 [2024-12-09 17:23:24.961295] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.601 NULL1 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1847442 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.601 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.601 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
00:12:58.601 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:58.601 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
[the @27/@28 trace pair above repeats identically at 00:12:58.601, once per iteration of the 20-pass loop]
00:12:58.601 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1847442
00:12:58.601 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:58.601 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:58.601 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:58.868 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[the poll block above ([[ 0 == 0 ]], kill -0 1847442, rpc_cmd, xtrace_disable, set +x) repeats about thirty times between 00:12:58.868 (17:23:25) and 00:13:08.842 (17:23:35) while PID 1847442 stays alive]
00:13:08.842 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1847442
00:13:09.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1847442) - No such process
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1847442
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
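The repeated `kill -0` / `rpc_cmd` entries in the trace come from a liveness-polling pattern: the test keeps issuing RPCs only while the stress process still exists, since `kill -0` delivers no signal and merely reports whether the PID is alive. A minimal standalone sketch of that pattern (the worker command and poll interval here are illustrative, not the SPDK script itself):

```shell
#!/usr/bin/env bash
# Start a background worker and poll it with kill -0, which sends
# no signal but succeeds only while the PID exists.
sleep 2 &
pid=$!

polls=0
while kill -0 "$pid" 2>/dev/null; do
    # The real test issues an RPC here; we just count iterations.
    polls=$((polls + 1))
    sleep 0.5
done

wait "$pid"   # reap the worker and collect its exit status
echo "worker exited after $polls polls"
```

Once `kill -0` fails, the follow-up `wait` both reaps the child and yields its exit status, which is why the trace shows `wait 1847442` immediately after the "No such process" message.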
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:09.100 rmmod nvme_tcp
00:13:09.100 rmmod nvme_fabrics
00:13:09.100 rmmod nvme_keyring
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1847201 ']'
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1847201
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1847201 ']'
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1847201
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1847201
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1847201'
00:13:09.100 killing process with pid 1847201
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1847201
00:13:09.100 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1847201
00:13:09.358 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:09.358 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:09.358 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:09.359 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:13:09.359 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:13:09.359 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:13:09.359 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:09.359 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:09.359 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:09.359 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:09.359 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:09.359 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:11.892 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:11.892 real 0m19.980s
00:13:11.892 user 0m42.419s
00:13:11.892 sys 0m8.638s
17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
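The `killprocess 1847201` trace follows a defensive kill pattern: check the PID is non-empty, confirm it is alive with `kill -0`, read its command name with `ps`, refuse to kill a `sudo` wrapper, then `kill` and `wait` to reap it. A simplified sketch of that pattern (the guard conditions are reduced; this is not the actual autotest helper):

```shell
#!/usr/bin/env bash
# Defensive kill: validate the PID, check liveness, inspect the
# command name, then terminate and reap the process.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # empty PID: nothing to do
    kill -0 "$pid" 2>/dev/null || return 1    # process no longer exists
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != "sudo" ] || return 1         # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap if it is our child
}

sleep 60 &
killprocess $!
```

The final `wait` matters on a CI node: without it a killed child lingers as a zombie, and a later `kill -0` on the same PID would wrongly report it alive.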
00:13:11.892 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.892 ************************************ 00:13:11.892 END TEST nvmf_connect_stress 00:13:11.892 ************************************ 00:13:11.893 17:23:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:11.893 17:23:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:11.893 17:23:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.893 17:23:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.893 ************************************ 00:13:11.893 START TEST nvmf_fused_ordering 00:13:11.893 ************************************ 00:13:11.893 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:11.893 * Looking for test storage... 
00:13:11.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.893 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:11.893 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:11.893 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:11.893 17:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.893 17:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:11.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.893 --rc genhtml_branch_coverage=1 00:13:11.893 --rc genhtml_function_coverage=1 00:13:11.893 --rc genhtml_legend=1 00:13:11.893 --rc geninfo_all_blocks=1 00:13:11.893 --rc geninfo_unexecuted_blocks=1 00:13:11.893 00:13:11.893 ' 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:11.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.893 --rc genhtml_branch_coverage=1 00:13:11.893 --rc genhtml_function_coverage=1 00:13:11.893 --rc genhtml_legend=1 00:13:11.893 --rc geninfo_all_blocks=1 00:13:11.893 --rc geninfo_unexecuted_blocks=1 00:13:11.893 00:13:11.893 ' 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:11.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.893 --rc genhtml_branch_coverage=1 00:13:11.893 --rc genhtml_function_coverage=1 00:13:11.893 --rc genhtml_legend=1 00:13:11.893 --rc geninfo_all_blocks=1 00:13:11.893 --rc geninfo_unexecuted_blocks=1 00:13:11.893 00:13:11.893 ' 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:11.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.893 --rc genhtml_branch_coverage=1 00:13:11.893 --rc genhtml_function_coverage=1 00:13:11.893 --rc genhtml_legend=1 00:13:11.893 --rc geninfo_all_blocks=1 00:13:11.893 --rc geninfo_unexecuted_blocks=1 00:13:11.893 00:13:11.893 ' 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
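The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.` and `-` into arrays (`read -ra ver1`, `read -ra ver2`) and compares component by component, padding the shorter version with zeros. A simplified reimplementation of that idea (numeric components only; the real script also handles a `>`/`=` operator argument):

```shell
#!/usr/bin/env bash
# lt VER1 VER2: succeed (return 0) iff VER1 < VER2, comparing
# dot/dash-separated numeric components left to right.
lt() {
    local IFS=.-                    # split on '.' and '-' like the trace
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( i = 0; i < max; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # pad missing parts with 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                        # equal versions are not less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2"
```

This is why the run above selects the pre-2.0 lcov option set: `lt 1.15 2` succeeds, so `lcov_rc_opt` gets the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` spelling.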
00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.893 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:11.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:11.894 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:18.465 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:18.466 17:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:18.466 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.466 17:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:18.466 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.466 17:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:18.466 Found net devices under 0000:af:00.0: cvl_0_0 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:18.466 Found net devices under 0000:af:00.1: cvl_0_1 
00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:18.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:18.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:13:18.466 00:13:18.466 --- 10.0.0.2 ping statistics --- 00:13:18.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.466 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:18.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:13:18.466 00:13:18.466 --- 10.0.0.1 ping statistics --- 00:13:18.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.466 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:18.466 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:18.466 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:18.466 17:23:44 
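The `nvmf/common.sh` steps traced above build an isolated network-namespace topology so the initiator (host side) and the SPDK target (namespace side) exchange NVMe/TCP traffic over the physical ports. A condensed sketch of that setup follows; the commands are paraphrased from the logged steps, the interface names (`cvl_0_0`, `cvl_0_1`) and addresses come from this run, and the sequence requires root plus the `ip`/`iptables` tools, so it is illustrative rather than meant to be re-run as-is:

```shell
# Sketch paraphrased from the nvmf/common.sh trace above (root required).
# One port stays on the host as the initiator; the other is moved into a
# dedicated namespace where the nvmf_tgt target app will later run.

ip netns add cvl_0_0_ns_spdk                  # namespace for the SPDK target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it

ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP on the host side
ip netns exec cvl_0_0_ns_spdk \
    ip addr add 10.0.0.2/24 dev cvl_0_0       # target IP inside the namespace

ip link set cvl_0_1 up                        # bring both ends (and loopback) up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow inbound NVMe/TCP (port 4420) on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                            # verify host -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and target -> host
```

The two successful pings logged above correspond to the final verification step, after which the harness prefixes `ip netns exec cvl_0_0_ns_spdk` onto the target application's command line so `nvmf_tgt` runs inside the namespace.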
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:18.466 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:18.466 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:18.466 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1852713 00:13:18.466 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1852713 00:13:18.466 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:18.466 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1852713 ']' 00:13:18.466 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.466 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.466 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:18.467 [2024-12-09 17:23:44.082015] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:13:18.467 [2024-12-09 17:23:44.082059] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.467 [2024-12-09 17:23:44.158325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.467 [2024-12-09 17:23:44.197306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.467 [2024-12-09 17:23:44.197340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.467 [2024-12-09 17:23:44.197347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.467 [2024-12-09 17:23:44.197353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.467 [2024-12-09 17:23:44.197358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:18.467 [2024-12-09 17:23:44.197831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:18.467 [2024-12-09 17:23:44.337275] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:18.467 [2024-12-09 17:23:44.357455] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:18.467 NULL1 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.467 17:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:18.467 [2024-12-09 17:23:44.417238] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:13:18.467 [2024-12-09 17:23:44.417268] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852738 ] 00:13:18.467 Attached to nqn.2016-06.io.spdk:cnode1 00:13:18.467 Namespace ID: 1 size: 1GB 00:13:18.467 fused_ordering(0) 00:13:18.467 fused_ordering(1) 00:13:18.467 fused_ordering(2) 00:13:18.467 fused_ordering(3) 00:13:18.467 fused_ordering(4) 00:13:18.467 fused_ordering(5) 00:13:18.467 fused_ordering(6) 00:13:18.467 fused_ordering(7) 00:13:18.467 fused_ordering(8) 00:13:18.467 fused_ordering(9) 00:13:18.467 fused_ordering(10) 00:13:18.467 fused_ordering(11) 00:13:18.467 fused_ordering(12) 00:13:18.467 fused_ordering(13) 00:13:18.467 fused_ordering(14) 00:13:18.467 fused_ordering(15) 00:13:18.467 fused_ordering(16) 00:13:18.467 fused_ordering(17) 00:13:18.467 fused_ordering(18) 00:13:18.467 fused_ordering(19) 00:13:18.467 fused_ordering(20) 00:13:18.467 fused_ordering(21) 00:13:18.467 fused_ordering(22) 00:13:18.467 fused_ordering(23) 00:13:18.467 fused_ordering(24) 00:13:18.467 fused_ordering(25) 00:13:18.467 fused_ordering(26) 00:13:18.467 fused_ordering(27) 00:13:18.467 
fused_ordering(28) 00:13:18.467 [... fused_ordering(29) through fused_ordering(1022) elided: one log line per iteration, counter increasing monotonically, timestamps advancing from 00:13:18.467 to 00:13:19.821 ...] fused_ordering(1023) 00:13:19.821 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:19.821 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:19.821 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:19.821 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:19.822 rmmod nvme_tcp 00:13:19.822 rmmod nvme_fabrics 00:13:19.822 rmmod nvme_keyring 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1852713 ']' 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1852713 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1852713 ']' 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1852713 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1852713 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1852713' 00:13:19.822 killing process with pid 1852713 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1852713 00:13:19.822 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1852713 00:13:20.081 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:20.081 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:13:20.081 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:20.081 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:20.081 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:20.081 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:20.081 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:20.081 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:20.081 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:20.081 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.081 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.081 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.987 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:21.987 00:13:21.987 real 0m10.620s 00:13:21.987 user 0m4.933s 00:13:21.987 sys 0m5.809s 00:13:21.987 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.987 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.987 ************************************ 00:13:21.987 END TEST nvmf_fused_ordering 00:13:21.987 ************************************ 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:22.247 17:23:48 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:22.247 ************************************ 00:13:22.247 START TEST nvmf_ns_masking 00:13:22.247 ************************************ 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:22.247 * Looking for test storage... 00:13:22.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.247 17:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:22.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.247 --rc genhtml_branch_coverage=1 00:13:22.247 --rc genhtml_function_coverage=1 00:13:22.247 --rc genhtml_legend=1 00:13:22.247 --rc geninfo_all_blocks=1 00:13:22.247 --rc geninfo_unexecuted_blocks=1 00:13:22.247 00:13:22.247 ' 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:22.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.247 --rc genhtml_branch_coverage=1 00:13:22.247 --rc genhtml_function_coverage=1 00:13:22.247 --rc genhtml_legend=1 00:13:22.247 --rc geninfo_all_blocks=1 00:13:22.247 --rc geninfo_unexecuted_blocks=1 00:13:22.247 00:13:22.247 ' 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:22.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.247 --rc genhtml_branch_coverage=1 00:13:22.247 --rc genhtml_function_coverage=1 00:13:22.247 --rc genhtml_legend=1 00:13:22.247 --rc geninfo_all_blocks=1 00:13:22.247 --rc geninfo_unexecuted_blocks=1 00:13:22.247 00:13:22.247 ' 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:22.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.247 --rc genhtml_branch_coverage=1 00:13:22.247 --rc 
genhtml_function_coverage=1 00:13:22.247 --rc genhtml_legend=1 00:13:22.247 --rc geninfo_all_blocks=1 00:13:22.247 --rc geninfo_unexecuted_blocks=1 00:13:22.247 00:13:22.247 ' 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.247 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.248 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:22.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=93fa4289-2484-40c4-a419-78c60fc570ab 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=81a5324c-5474-41af-9684-c3e06f5cbbca 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=3f7c7e9e-8274-45d7-84db-4ed20c64469a 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:22.507 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:22.508 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.508 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:22.508 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:22.508 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:22.508 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.508 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.508 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.508 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:22.508 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:22.508 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:22.508 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:29.079 17:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.079 17:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:29.079 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:29.079 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.079 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:13:29.080 Found net devices under 0000:af:00.0: cvl_0_0 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:29.080 Found net devices under 0000:af:00.1: cvl_0_1 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:29.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:13:29.080 00:13:29.080 --- 10.0.0.2 ping statistics --- 00:13:29.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.080 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:29.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:13:29.080 00:13:29.080 --- 10.0.0.1 ping statistics --- 00:13:29.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.080 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1856586 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1856586 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1856586 ']' 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.080 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:29.080 [2024-12-09 17:23:54.792094] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:13:29.080 [2024-12-09 17:23:54.792137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.080 [2024-12-09 17:23:54.868152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.080 [2024-12-09 17:23:54.906675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.080 [2024-12-09 17:23:54.906710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:29.080 [2024-12-09 17:23:54.906718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.080 [2024-12-09 17:23:54.906724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.080 [2024-12-09 17:23:54.906730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.080 [2024-12-09 17:23:54.907231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.080 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.080 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:29.080 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:29.080 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:29.080 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:29.080 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.080 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:29.080 [2024-12-09 17:23:55.206627] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.080 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:29.080 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:29.080 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
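The target-setup stretch of the log above and just below issues a fixed sequence of JSON-RPC calls. A hedged sketch of that sequence follows; `scripts/rpc.py` is assumed to point at SPDK's rpc.py (the log uses the full Jenkins workspace path), and the commands are only composed and printed here, since actually running them requires a live `nvmf_tgt` process.

```shell
# Sketch of the RPC sequence this part of the log walks through.
# Assumption: "scripts/rpc.py" is SPDK's rpc.py; commands are printed,
# not executed, because they need a running nvmf_tgt to succeed.
rpc="scripts/rpc.py"
cmds=(
  "nvmf_create_transport -t tcp -o -u 8192"
  "bdev_malloc_create 64 512 -b Malloc1"
  "bdev_malloc_create 64 512 -b Malloc2"
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME"
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1"
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
for cmd in "${cmds[@]}"; do
  echo "$rpc $cmd"   # print each call in order
done
```

The order matters: the transport must exist before the subsystem gets a TCP listener, and the bdev must exist before it can be attached as a namespace.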
00:13:29.080 Malloc1 00:13:29.080 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:29.339 Malloc2 00:13:29.339 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:29.339 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:29.597 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.856 [2024-12-09 17:23:56.215626] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.856 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:29.856 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3f7c7e9e-8274-45d7-84db-4ed20c64469a -a 10.0.0.2 -s 4420 -i 4 00:13:30.115 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:30.115 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:30.115 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.115 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:30.115 17:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:32.019 [ 0]:0x1 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:32.019 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:32.278 
17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d8e6ec8bc2f42ba85f751e56c64befe 00:13:32.278 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d8e6ec8bc2f42ba85f751e56c64befe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:32.278 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:32.278 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:32.278 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:32.278 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:32.278 [ 0]:0x1 00:13:32.278 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:32.278 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:32.278 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d8e6ec8bc2f42ba85f751e56c64befe 00:13:32.278 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d8e6ec8bc2f42ba85f751e56c64befe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:32.278 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:32.278 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:32.278 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:32.278 [ 1]:0x2 00:13:32.536 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
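The `ns_is_visible` checks in the log boil down to one comparison: `nvme id-ns -o json | jq -r .nguid` returns the namespace's NGUID, and a masked namespace reads back as all zeros. A minimal sketch of that comparison, with the NGUID values stubbed from the log instead of queried from a device:

```shell
# Sketch of the nguid test behind ns_is_visible in target/ns_masking.sh.
# Assumption: the nguid strings below stand in for the output of
# `nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid`.
is_masked() {
  # an all-zero NGUID means the controller cannot see the namespace
  [[ "$1" == "00000000000000000000000000000000" ]]
}
visible_nguid="0d8e6ec8bc2f42ba85f751e56c64befe"   # value the log reads for ns 1
is_masked "$visible_nguid" || echo "ns 1 visible"
```

This is why the log's `[[ ... != \0\0... ]]` tests pass for a visible namespace and fail once `nvmf_ns_remove_host` hides it.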
00:13:32.536 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:32.536 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f7bfdd54d72493990dcd1a080d64a3e 00:13:32.536 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f7bfdd54d72493990dcd1a080d64a3e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:32.536 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:32.536 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.537 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.795 17:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:32.795 17:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:32.795 17:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3f7c7e9e-8274-45d7-84db-4ed20c64469a -a 10.0.0.2 -s 4420 -i 4 00:13:33.054 17:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:33.054 17:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:33.054 17:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.054 17:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:33.054 17:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:33.054 17:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
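The `NOT ns_is_visible 0x1` lines that follow use a helper from `common/autotest_common.sh` that inverts a command's exit status. A hedged sketch of that behavior as it appears in the log (the real helper also validates its argument via `valid_exec_arg` and caps `es` at 128, which is omitted here):

```shell
# Simplified sketch of the NOT wrapper: run the command, capture its exit
# status in es, and succeed only when the wrapped command failed.
NOT() {
  local es=0
  "$@" || es=$?
  (( es != 0 ))   # invert: success here means the command failed
}
NOT false && echo "wrapped failure reported as success"
```

So `NOT ns_is_visible 0x1` asserts that namespace 1 is hidden from this host, which is exactly the post-condition after adding it with `--no-auto-visible`.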
00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:35.589 [ 0]:0x2 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f7bfdd54d72493990dcd1a080d64a3e 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f7bfdd54d72493990dcd1a080d64a3e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:35.589 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:35.589 [ 0]:0x1 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d8e6ec8bc2f42ba85f751e56c64befe 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d8e6ec8bc2f42ba85f751e56c64befe != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:35.589 [ 1]:0x2 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f7bfdd54d72493990dcd1a080d64a3e 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f7bfdd54d72493990dcd1a080d64a3e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.589 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:35.847 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:35.847 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.848 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:36.106 [ 0]:0x2 00:13:36.107 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:36.107 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:36.107 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f7bfdd54d72493990dcd1a080d64a3e 00:13:36.107 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f7bfdd54d72493990dcd1a080d64a3e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:36.107 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:36.107 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.107 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:36.365 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:36.365 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3f7c7e9e-8274-45d7-84db-4ed20c64469a -a 10.0.0.2 -s 4420 -i 4 00:13:36.624 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:36.624 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:36.624 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.624 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:36.624 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:36.624 17:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:38.528 [ 0]:0x1 00:13:38.528 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:38.528 17:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.528 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d8e6ec8bc2f42ba85f751e56c64befe 00:13:38.528 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d8e6ec8bc2f42ba85f751e56c64befe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.528 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:38.528 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.528 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:38.528 [ 1]:0x2 00:13:38.528 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:38.529 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.786 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f7bfdd54d72493990dcd1a080d64a3e 00:13:38.786 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f7bfdd54d72493990dcd1a080d64a3e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.786 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:38.786 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:38.786 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:38.787 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:38.787 
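The `waitforserial SPDKISFASTANDAWESOME 2` step above polls until the expected number of block devices appears. A sketch of its counting logic, with the `lsblk -l -o NAME,SERIAL` output stubbed to the two namespaces the log exposes rather than read from the system:

```shell
# Sketch of waitforserial's device counting. Assumption: lsblk_out stands
# in for live `lsblk -l -o NAME,SERIAL` output; the real helper re-runs
# lsblk in a sleep loop until the count matches.
lsblk_out=$'nvme0n1 SPDKISFASTANDAWESOME\nnvme0n2 SPDKISFASTANDAWESOME'
nvme_devices=$(grep -c SPDKISFASTANDAWESOME <<<"$lsblk_out")
nvme_device_counter=2   # expected count, from waitforserial's second argument
(( nvme_devices == nvme_device_counter )) && echo "all expected devices present"
```

When only one namespace is host-visible, `grep -c` returns 1 and the loop keeps sleeping, which is why the log's `nvme_devices=1` iterations precede `return 0`.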
17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:38.787 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.787 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:38.787 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.787 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:38.787 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.787 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:38.787 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:38.787 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:39.045 [ 0]:0x2 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f7bfdd54d72493990dcd1a080d64a3e 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f7bfdd54d72493990dcd1a080d64a3e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.045 17:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:39.045 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:39.045 [2024-12-09 17:24:05.565942] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:39.045 request: 00:13:39.045 { 00:13:39.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:39.045 "nsid": 2, 00:13:39.045 "host": "nqn.2016-06.io.spdk:host1", 00:13:39.045 "method": "nvmf_ns_remove_host", 00:13:39.045 "req_id": 1 00:13:39.045 } 00:13:39.045 Got JSON-RPC error response 00:13:39.045 response: 00:13:39.045 { 00:13:39.045 "code": -32602, 00:13:39.045 "message": "Invalid parameters" 00:13:39.045 } 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:39.304 17:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:39.304 [ 0]:0x2 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f7bfdd54d72493990dcd1a080d64a3e 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f7bfdd54d72493990dcd1a080d64a3e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:39.304 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.563 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1858719 00:13:39.563 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:39.563 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.563 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1858719 /var/tmp/host.sock 00:13:39.563 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1858719 ']' 00:13:39.563 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:39.563 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.563 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:39.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:39.563 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.563 17:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.563 [2024-12-09 17:24:05.944357] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:13:39.563 [2024-12-09 17:24:05.944402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858719 ] 00:13:39.563 [2024-12-09 17:24:06.017699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.563 [2024-12-09 17:24:06.056292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.822 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.822 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:39.822 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.080 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.338 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 93fa4289-2484-40c4-a419-78c60fc570ab 00:13:40.339 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:40.339 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 93FA4289248440C4A41978C60FC570AB -i 00:13:40.597 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 81a5324c-5474-41af-9684-c3e06f5cbbca 00:13:40.597 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:40.597 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 81A5324C547441AF9684C3E06F5CBBCA -i 00:13:40.597 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:40.855 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:41.113 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:41.113 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:41.372 nvme0n1 00:13:41.372 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:41.372 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:41.630 nvme1n2 00:13:41.630 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:41.630 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:41.630 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:41.630 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:41.630 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:41.888 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:41.888 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:41.888 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:41.888 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:42.146 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 93fa4289-2484-40c4-a419-78c60fc570ab == \9\3\f\a\4\2\8\9\-\2\4\8\4\-\4\0\c\4\-\a\4\1\9\-\7\8\c\6\0\f\c\5\7\0\a\b ]] 00:13:42.146 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:42.146 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:42.146 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:42.405 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 81a5324c-5474-41af-9684-c3e06f5cbbca == \8\1\a\5\3\2\4\c\-\5\4\7\4\-\4\1\a\f\-\9\6\8\4\-\c\3\e\0\6\f\5\c\b\b\c\a ]] 00:13:42.405 17:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.405 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 93fa4289-2484-40c4-a419-78c60fc570ab 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 93FA4289248440C4A41978C60FC570AB 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 93FA4289248440C4A41978C60FC570AB 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:42.664 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 93FA4289248440C4A41978C60FC570AB 00:13:42.923 [2024-12-09 17:24:09.304173] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:42.923 [2024-12-09 17:24:09.304205] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:42.924 [2024-12-09 17:24:09.304213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.924 request: 00:13:42.924 { 00:13:42.924 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.924 "namespace": { 00:13:42.924 "bdev_name": "invalid", 00:13:42.924 "nsid": 1, 00:13:42.924 "nguid": "93FA4289248440C4A41978C60FC570AB", 00:13:42.924 "no_auto_visible": false, 00:13:42.924 "hide_metadata": false 00:13:42.924 }, 00:13:42.924 "method": "nvmf_subsystem_add_ns", 00:13:42.924 "req_id": 1 00:13:42.924 } 00:13:42.924 Got JSON-RPC error response 00:13:42.924 response: 00:13:42.924 { 00:13:42.924 "code": -32602, 00:13:42.924 "message": "Invalid parameters" 00:13:42.924 } 00:13:42.924 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:42.924 17:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:42.924 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:42.924 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:42.924 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 93fa4289-2484-40c4-a419-78c60fc570ab 00:13:42.924 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:42.924 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 93FA4289248440C4A41978C60FC570AB -i 00:13:43.183 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:45.084 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:45.084 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:45.084 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:45.343 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:45.343 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1858719 00:13:45.343 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1858719 ']' 00:13:45.343 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1858719 00:13:45.343 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:45.343 17:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.343 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1858719 00:13:45.343 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:45.343 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:45.343 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1858719' 00:13:45.343 killing process with pid 1858719 00:13:45.343 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1858719 00:13:45.343 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1858719 00:13:45.602 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:13:45.861 rmmod nvme_tcp 00:13:45.861 rmmod nvme_fabrics 00:13:45.861 rmmod nvme_keyring 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1856586 ']' 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1856586 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1856586 ']' 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1856586 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1856586 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1856586' 00:13:45.861 killing process with pid 1856586 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1856586 00:13:45.861 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1856586 00:13:46.120 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:46.120 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:46.120 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:46.120 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:46.120 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:46.120 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:46.120 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:46.120 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:46.120 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:46.120 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.120 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.120 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.717 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:48.717 00:13:48.717 real 0m26.068s 00:13:48.717 user 0m31.215s 00:13:48.717 sys 0m7.055s 00:13:48.717 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.717 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:48.717 ************************************ 00:13:48.717 END TEST nvmf_ns_masking 00:13:48.717 ************************************ 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.718 ************************************ 00:13:48.718 START TEST nvmf_nvme_cli 00:13:48.718 ************************************ 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:48.718 * Looking for test storage... 00:13:48.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:48.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.718 --rc genhtml_branch_coverage=1 00:13:48.718 --rc genhtml_function_coverage=1 00:13:48.718 --rc genhtml_legend=1 00:13:48.718 --rc geninfo_all_blocks=1 00:13:48.718 --rc geninfo_unexecuted_blocks=1 00:13:48.718 
00:13:48.718 ' 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:48.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.718 --rc genhtml_branch_coverage=1 00:13:48.718 --rc genhtml_function_coverage=1 00:13:48.718 --rc genhtml_legend=1 00:13:48.718 --rc geninfo_all_blocks=1 00:13:48.718 --rc geninfo_unexecuted_blocks=1 00:13:48.718 00:13:48.718 ' 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:48.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.718 --rc genhtml_branch_coverage=1 00:13:48.718 --rc genhtml_function_coverage=1 00:13:48.718 --rc genhtml_legend=1 00:13:48.718 --rc geninfo_all_blocks=1 00:13:48.718 --rc geninfo_unexecuted_blocks=1 00:13:48.718 00:13:48.718 ' 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:48.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.718 --rc genhtml_branch_coverage=1 00:13:48.718 --rc genhtml_function_coverage=1 00:13:48.718 --rc genhtml_legend=1 00:13:48.718 --rc geninfo_all_blocks=1 00:13:48.718 --rc geninfo_unexecuted_blocks=1 00:13:48.718 00:13:48.718 ' 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.718 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.719 17:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:48.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:48.719 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:54.056 17:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.056 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:54.057 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:54.057 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.057 17:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:54.057 Found net devices under 0000:af:00.0: cvl_0_0 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:54.057 Found net devices under 0000:af:00.1: cvl_0_1 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.057 17:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.057 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:54.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:13:54.317 00:13:54.317 --- 10.0.0.2 ping statistics --- 00:13:54.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.317 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:13:54.317 00:13:54.317 --- 10.0.0.1 ping statistics --- 00:13:54.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.317 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:54.317 17:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1863582 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1863582 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1863582 ']' 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.317 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.576 [2024-12-09 17:24:20.868009] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:13:54.576 [2024-12-09 17:24:20.868059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.576 [2024-12-09 17:24:20.948369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.576 [2024-12-09 17:24:20.989364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.576 [2024-12-09 17:24:20.989410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.576 [2024-12-09 17:24:20.989419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.576 [2024-12-09 17:24:20.989424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.576 [2024-12-09 17:24:20.989429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:54.576 [2024-12-09 17:24:20.990909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.576 [2024-12-09 17:24:20.991021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.576 [2024-12-09 17:24:20.991127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.576 [2024-12-09 17:24:20.991129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.576 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:54.576 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:54.576 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:54.576 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:54.576 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 [2024-12-09 17:24:21.136729] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 Malloc0 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 Malloc1 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 [2024-12-09 17:24:21.229397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.835 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:55.094 00:13:55.094 Discovery Log Number of Records 2, Generation counter 2 00:13:55.094 =====Discovery Log Entry 0====== 00:13:55.094 trtype: tcp 00:13:55.094 adrfam: ipv4 00:13:55.094 subtype: current discovery subsystem 00:13:55.094 treq: not required 00:13:55.094 portid: 0 00:13:55.094 trsvcid: 4420 
00:13:55.094 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:55.094 traddr: 10.0.0.2 00:13:55.094 eflags: explicit discovery connections, duplicate discovery information 00:13:55.094 sectype: none 00:13:55.094 =====Discovery Log Entry 1====== 00:13:55.094 trtype: tcp 00:13:55.094 adrfam: ipv4 00:13:55.094 subtype: nvme subsystem 00:13:55.094 treq: not required 00:13:55.094 portid: 0 00:13:55.094 trsvcid: 4420 00:13:55.094 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:55.094 traddr: 10.0.0.2 00:13:55.094 eflags: none 00:13:55.094 sectype: none 00:13:55.094 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:55.094 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:55.094 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:55.094 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.094 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:55.094 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:55.094 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.094 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:55.094 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.094 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:55.094 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.471 17:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:56.471 17:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:56.471 17:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.471 17:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:56.471 17:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:56.471 17:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:58.375 
17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:58.375 /dev/nvme0n2 ]] 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:58.375 rmmod nvme_tcp 00:13:58.375 rmmod nvme_fabrics 00:13:58.375 rmmod nvme_keyring 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1863582 ']' 
00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1863582 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1863582 ']' 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1863582 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.375 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1863582 00:13:58.635 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.635 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.635 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1863582' 00:13:58.635 killing process with pid 1863582 00:13:58.635 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1863582 00:13:58.635 17:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1863582 00:13:58.635 17:24:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:58.635 17:24:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:58.635 17:24:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:58.635 17:24:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:58.635 17:24:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:58.635 17:24:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:58.635 17:24:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:58.635 17:24:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:58.635 17:24:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:58.635 17:24:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.635 17:24:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.635 17:24:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:01.170 00:14:01.170 real 0m12.472s 00:14:01.170 user 0m18.176s 00:14:01.170 sys 0m5.051s 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.170 ************************************ 00:14:01.170 END TEST nvmf_nvme_cli 00:14:01.170 ************************************ 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:01.170 ************************************ 00:14:01.170 
START TEST nvmf_vfio_user 00:14:01.170 ************************************ 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:01.170 * Looking for test storage... 00:14:01.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.170 17:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:01.170 17:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:01.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.170 --rc genhtml_branch_coverage=1 00:14:01.170 --rc genhtml_function_coverage=1 00:14:01.170 --rc genhtml_legend=1 00:14:01.170 --rc geninfo_all_blocks=1 00:14:01.170 --rc geninfo_unexecuted_blocks=1 00:14:01.170 00:14:01.170 ' 00:14:01.170 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:01.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.170 --rc genhtml_branch_coverage=1 00:14:01.170 --rc genhtml_function_coverage=1 00:14:01.170 --rc genhtml_legend=1 00:14:01.170 --rc geninfo_all_blocks=1 00:14:01.171 --rc geninfo_unexecuted_blocks=1 00:14:01.171 00:14:01.171 ' 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:01.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.171 --rc genhtml_branch_coverage=1 00:14:01.171 --rc genhtml_function_coverage=1 00:14:01.171 --rc genhtml_legend=1 00:14:01.171 --rc geninfo_all_blocks=1 00:14:01.171 --rc geninfo_unexecuted_blocks=1 00:14:01.171 00:14:01.171 ' 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:01.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.171 --rc genhtml_branch_coverage=1 00:14:01.171 --rc genhtml_function_coverage=1 00:14:01.171 --rc genhtml_legend=1 00:14:01.171 --rc geninfo_all_blocks=1 00:14:01.171 --rc geninfo_unexecuted_blocks=1 00:14:01.171 00:14:01.171 ' 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.171 
17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:01.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:01.171 17:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1864789 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1864789' 00:14:01.171 Process pid: 1864789 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1864789 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 1864789 ']' 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.171 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:01.171 [2024-12-09 17:24:27.555190] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:14:01.171 [2024-12-09 17:24:27.555235] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.171 [2024-12-09 17:24:27.628987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.171 [2024-12-09 17:24:27.670384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.171 [2024-12-09 17:24:27.670421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.171 [2024-12-09 17:24:27.670429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.171 [2024-12-09 17:24:27.670435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.171 [2024-12-09 17:24:27.670440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:01.171 [2024-12-09 17:24:27.671758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.171 [2024-12-09 17:24:27.671895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.171 [2024-12-09 17:24:27.672002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.171 [2024-12-09 17:24:27.672003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.430 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.430 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:01.430 17:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:02.366 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:02.625 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:02.625 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:02.625 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:02.625 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:02.625 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:02.883 Malloc1 00:14:02.883 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:02.883 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:03.141 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:03.400 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:03.400 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:03.400 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:03.658 Malloc2 00:14:03.658 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:03.658 17:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:03.916 17:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:04.177 17:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:04.177 17:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:04.177 17:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:04.177 17:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:04.177 17:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:04.177 17:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:04.177 [2024-12-09 17:24:30.609955] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:14:04.177 [2024-12-09 17:24:30.609994] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865381 ] 00:14:04.177 [2024-12-09 17:24:30.651529] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:04.177 [2024-12-09 17:24:30.659434] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:04.177 [2024-12-09 17:24:30.659458] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff083a4b000 00:14:04.177 [2024-12-09 17:24:30.660432] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.177 [2024-12-09 17:24:30.661436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.177 [2024-12-09 17:24:30.662443] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.177 [2024-12-09 17:24:30.663448] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.177 [2024-12-09 17:24:30.664453] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.177 [2024-12-09 17:24:30.665458] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.177 [2024-12-09 17:24:30.666463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.177 [2024-12-09 17:24:30.667470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.177 [2024-12-09 17:24:30.668479] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:04.178 [2024-12-09 17:24:30.668489] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff083a40000 00:14:04.178 [2024-12-09 17:24:30.669544] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:04.178 [2024-12-09 17:24:30.685393] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:04.178 [2024-12-09 17:24:30.685422] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:04.178 [2024-12-09 17:24:30.690669] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:04.178 [2024-12-09 17:24:30.690705] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:04.178 [2024-12-09 17:24:30.690776] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:04.178 [2024-12-09 17:24:30.690793] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:04.178 [2024-12-09 17:24:30.690799] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:04.178 [2024-12-09 17:24:30.691659] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:04.178 [2024-12-09 17:24:30.691668] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:04.178 [2024-12-09 17:24:30.691674] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:04.178 [2024-12-09 17:24:30.692671] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:04.178 [2024-12-09 17:24:30.692680] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:04.178 [2024-12-09 17:24:30.692687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:04.178 [2024-12-09 17:24:30.693676] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:04.178 [2024-12-09 17:24:30.693684] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:04.178 [2024-12-09 17:24:30.694680] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:04.178 [2024-12-09 17:24:30.694692] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:04.178 [2024-12-09 17:24:30.694697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:04.178 [2024-12-09 17:24:30.694704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:04.178 [2024-12-09 17:24:30.694812] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:04.178 [2024-12-09 17:24:30.694817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:04.178 [2024-12-09 17:24:30.694822] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:04.178 [2024-12-09 17:24:30.695688] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:04.178 [2024-12-09 17:24:30.696688] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:04.178 [2024-12-09 17:24:30.697694] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:04.178 [2024-12-09 17:24:30.698698] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:04.178 [2024-12-09 17:24:30.698799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:04.178 [2024-12-09 17:24:30.699711] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:04.178 [2024-12-09 17:24:30.699720] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:04.178 [2024-12-09 17:24:30.699727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:04.178 [2024-12-09 17:24:30.699744] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:04.178 [2024-12-09 17:24:30.699754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:04.178 [2024-12-09 17:24:30.699774] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.178 [2024-12-09 17:24:30.699779] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.178 [2024-12-09 17:24:30.699782] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.178 [2024-12-09 17:24:30.699796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.178 [2024-12-09 17:24:30.699840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:04.178 [2024-12-09 17:24:30.699849] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:04.178 [2024-12-09 17:24:30.699856] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:04.178 [2024-12-09 17:24:30.699860] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:04.178 [2024-12-09 17:24:30.699864] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:04.178 [2024-12-09 17:24:30.699869] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:04.178 [2024-12-09 17:24:30.699872] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:04.178 [2024-12-09 17:24:30.699877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:04.178 [2024-12-09 17:24:30.699883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:04.178 [2024-12-09 17:24:30.699892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:04.178 [2024-12-09 17:24:30.699907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:04.178 [2024-12-09 17:24:30.699918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.178 [2024-12-09 
17:24:30.699926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.178 [2024-12-09 17:24:30.699933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.178 [2024-12-09 17:24:30.699940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.178 [2024-12-09 17:24:30.699944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:04.178 [2024-12-09 17:24:30.699952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:04.178 [2024-12-09 17:24:30.699960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:04.178 [2024-12-09 17:24:30.699972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:04.178 [2024-12-09 17:24:30.699978] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:04.178 [2024-12-09 17:24:30.699983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:04.178 [2024-12-09 17:24:30.699989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:04.178 [2024-12-09 17:24:30.699994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:04.178 [2024-12-09 17:24:30.700002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:04.178 [2024-12-09 17:24:30.700015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:04.178 [2024-12-09 17:24:30.700065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:04.178 [2024-12-09 17:24:30.700072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:04.178 [2024-12-09 17:24:30.700079] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:04.178 [2024-12-09 17:24:30.700083] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:04.178 [2024-12-09 17:24:30.700086] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.178 [2024-12-09 17:24:30.700091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:04.178 [2024-12-09 17:24:30.700106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:04.178 [2024-12-09 17:24:30.700114] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:04.178 [2024-12-09 17:24:30.700123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:04.178 [2024-12-09 17:24:30.700131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:04.178 [2024-12-09 17:24:30.700136] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.178 [2024-12-09 17:24:30.700140] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.178 [2024-12-09 17:24:30.700143] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.178 [2024-12-09 17:24:30.700148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.178 [2024-12-09 17:24:30.700171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:04.178 [2024-12-09 17:24:30.700182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:04.179 [2024-12-09 17:24:30.700189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:04.179 [2024-12-09 17:24:30.700195] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.179 [2024-12-09 17:24:30.700198] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.179 [2024-12-09 17:24:30.700201] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.179 [2024-12-09 17:24:30.700208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.179 [2024-12-09 17:24:30.700219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:04.179 [2024-12-09 17:24:30.700227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:04.179 [2024-12-09 17:24:30.700233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:04.179 [2024-12-09 17:24:30.700240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:04.179 [2024-12-09 17:24:30.700247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:04.179 [2024-12-09 17:24:30.700252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:04.179 [2024-12-09 17:24:30.700257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:04.179 [2024-12-09 17:24:30.700261] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:04.179 [2024-12-09 17:24:30.700265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:04.179 [2024-12-09 17:24:30.700270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:04.179 [2024-12-09 17:24:30.700287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:04.179 [2024-12-09 17:24:30.700295] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:04.179 [2024-12-09 17:24:30.700306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:04.179 [2024-12-09 17:24:30.700314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:04.179 [2024-12-09 17:24:30.700323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:04.179 [2024-12-09 17:24:30.700335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:04.179 [2024-12-09 17:24:30.700345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:04.179 [2024-12-09 17:24:30.700355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:04.179 [2024-12-09 17:24:30.700366] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:04.179 [2024-12-09 17:24:30.700370] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:04.179 [2024-12-09 17:24:30.700373] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:04.179 [2024-12-09 17:24:30.700377] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:04.179 [2024-12-09 17:24:30.700380] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:04.179 [2024-12-09 17:24:30.700386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:04.179 [2024-12-09 17:24:30.700392] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:04.179 [2024-12-09 17:24:30.700399] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:04.179 [2024-12-09 17:24:30.700402] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.179 [2024-12-09 17:24:30.700407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:04.179 [2024-12-09 17:24:30.700414] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:04.179 [2024-12-09 17:24:30.700417] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.179 [2024-12-09 17:24:30.700420] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.179 [2024-12-09 17:24:30.700425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.179 [2024-12-09 17:24:30.700432] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:04.179 [2024-12-09 17:24:30.700435] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:04.179 [2024-12-09 17:24:30.700439] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.179 [2024-12-09 17:24:30.700444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:04.179 [2024-12-09 17:24:30.700449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:04.179 [2024-12-09 17:24:30.700460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:04.179 [2024-12-09 17:24:30.700469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:04.179 [2024-12-09 17:24:30.700476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:04.179 ===================================================== 00:14:04.179 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:04.179 ===================================================== 00:14:04.179 Controller Capabilities/Features 00:14:04.179 ================================ 00:14:04.179 Vendor ID: 4e58 00:14:04.179 Subsystem Vendor ID: 4e58 00:14:04.179 Serial Number: SPDK1 00:14:04.179 Model Number: SPDK bdev Controller 00:14:04.179 Firmware Version: 25.01 00:14:04.179 Recommended Arb Burst: 6 00:14:04.179 IEEE OUI Identifier: 8d 6b 50 00:14:04.179 Multi-path I/O 00:14:04.179 May have multiple subsystem ports: Yes 00:14:04.179 May have multiple controllers: Yes 00:14:04.179 Associated with SR-IOV VF: No 00:14:04.179 Max Data Transfer Size: 131072 00:14:04.179 Max Number of Namespaces: 32 00:14:04.179 Max Number of I/O Queues: 127 00:14:04.179 NVMe Specification Version (VS): 1.3 00:14:04.179 NVMe Specification Version (Identify): 1.3 00:14:04.179 Maximum Queue Entries: 256 00:14:04.179 Contiguous Queues Required: Yes 00:14:04.179 Arbitration Mechanisms Supported 00:14:04.179 Weighted Round Robin: Not Supported 00:14:04.179 Vendor Specific: Not Supported 00:14:04.179 Reset Timeout: 15000 ms 00:14:04.179 Doorbell Stride: 4 bytes 00:14:04.179 NVM Subsystem Reset: Not Supported 00:14:04.179 Command Sets Supported 00:14:04.179 NVM Command Set: Supported 00:14:04.179 Boot Partition: Not Supported 00:14:04.179 Memory 
Page Size Minimum: 4096 bytes 00:14:04.179 Memory Page Size Maximum: 4096 bytes 00:14:04.179 Persistent Memory Region: Not Supported 00:14:04.179 Optional Asynchronous Events Supported 00:14:04.179 Namespace Attribute Notices: Supported 00:14:04.179 Firmware Activation Notices: Not Supported 00:14:04.179 ANA Change Notices: Not Supported 00:14:04.179 PLE Aggregate Log Change Notices: Not Supported 00:14:04.179 LBA Status Info Alert Notices: Not Supported 00:14:04.179 EGE Aggregate Log Change Notices: Not Supported 00:14:04.179 Normal NVM Subsystem Shutdown event: Not Supported 00:14:04.179 Zone Descriptor Change Notices: Not Supported 00:14:04.179 Discovery Log Change Notices: Not Supported 00:14:04.179 Controller Attributes 00:14:04.179 128-bit Host Identifier: Supported 00:14:04.179 Non-Operational Permissive Mode: Not Supported 00:14:04.179 NVM Sets: Not Supported 00:14:04.179 Read Recovery Levels: Not Supported 00:14:04.179 Endurance Groups: Not Supported 00:14:04.179 Predictable Latency Mode: Not Supported 00:14:04.179 Traffic Based Keep ALive: Not Supported 00:14:04.179 Namespace Granularity: Not Supported 00:14:04.179 SQ Associations: Not Supported 00:14:04.179 UUID List: Not Supported 00:14:04.179 Multi-Domain Subsystem: Not Supported 00:14:04.179 Fixed Capacity Management: Not Supported 00:14:04.179 Variable Capacity Management: Not Supported 00:14:04.179 Delete Endurance Group: Not Supported 00:14:04.179 Delete NVM Set: Not Supported 00:14:04.179 Extended LBA Formats Supported: Not Supported 00:14:04.179 Flexible Data Placement Supported: Not Supported 00:14:04.179 00:14:04.179 Controller Memory Buffer Support 00:14:04.179 ================================ 00:14:04.179 Supported: No 00:14:04.179 00:14:04.179 Persistent Memory Region Support 00:14:04.179 ================================ 00:14:04.179 Supported: No 00:14:04.179 00:14:04.179 Admin Command Set Attributes 00:14:04.179 ============================ 00:14:04.179 Security Send/Receive: Not Supported 
00:14:04.179 Format NVM: Not Supported 00:14:04.179 Firmware Activate/Download: Not Supported 00:14:04.179 Namespace Management: Not Supported 00:14:04.179 Device Self-Test: Not Supported 00:14:04.179 Directives: Not Supported 00:14:04.179 NVMe-MI: Not Supported 00:14:04.179 Virtualization Management: Not Supported 00:14:04.179 Doorbell Buffer Config: Not Supported 00:14:04.179 Get LBA Status Capability: Not Supported 00:14:04.179 Command & Feature Lockdown Capability: Not Supported 00:14:04.179 Abort Command Limit: 4 00:14:04.179 Async Event Request Limit: 4 00:14:04.179 Number of Firmware Slots: N/A 00:14:04.179 Firmware Slot 1 Read-Only: N/A 00:14:04.179 Firmware Activation Without Reset: N/A 00:14:04.179 Multiple Update Detection Support: N/A 00:14:04.179 Firmware Update Granularity: No Information Provided 00:14:04.179 Per-Namespace SMART Log: No 00:14:04.180 Asymmetric Namespace Access Log Page: Not Supported 00:14:04.180 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:04.180 Command Effects Log Page: Supported 00:14:04.180 Get Log Page Extended Data: Supported 00:14:04.180 Telemetry Log Pages: Not Supported 00:14:04.180 Persistent Event Log Pages: Not Supported 00:14:04.180 Supported Log Pages Log Page: May Support 00:14:04.180 Commands Supported & Effects Log Page: Not Supported 00:14:04.180 Feature Identifiers & Effects Log Page:May Support 00:14:04.180 NVMe-MI Commands & Effects Log Page: May Support 00:14:04.180 Data Area 4 for Telemetry Log: Not Supported 00:14:04.180 Error Log Page Entries Supported: 128 00:14:04.180 Keep Alive: Supported 00:14:04.180 Keep Alive Granularity: 10000 ms 00:14:04.180 00:14:04.180 NVM Command Set Attributes 00:14:04.180 ========================== 00:14:04.180 Submission Queue Entry Size 00:14:04.180 Max: 64 00:14:04.180 Min: 64 00:14:04.180 Completion Queue Entry Size 00:14:04.180 Max: 16 00:14:04.180 Min: 16 00:14:04.180 Number of Namespaces: 32 00:14:04.180 Compare Command: Supported 00:14:04.180 Write Uncorrectable 
Command: Not Supported 00:14:04.180 Dataset Management Command: Supported 00:14:04.180 Write Zeroes Command: Supported 00:14:04.180 Set Features Save Field: Not Supported 00:14:04.180 Reservations: Not Supported 00:14:04.180 Timestamp: Not Supported 00:14:04.180 Copy: Supported 00:14:04.180 Volatile Write Cache: Present 00:14:04.180 Atomic Write Unit (Normal): 1 00:14:04.180 Atomic Write Unit (PFail): 1 00:14:04.180 Atomic Compare & Write Unit: 1 00:14:04.180 Fused Compare & Write: Supported 00:14:04.180 Scatter-Gather List 00:14:04.180 SGL Command Set: Supported (Dword aligned) 00:14:04.180 SGL Keyed: Not Supported 00:14:04.180 SGL Bit Bucket Descriptor: Not Supported 00:14:04.180 SGL Metadata Pointer: Not Supported 00:14:04.180 Oversized SGL: Not Supported 00:14:04.180 SGL Metadata Address: Not Supported 00:14:04.180 SGL Offset: Not Supported 00:14:04.180 Transport SGL Data Block: Not Supported 00:14:04.180 Replay Protected Memory Block: Not Supported 00:14:04.180 00:14:04.180 Firmware Slot Information 00:14:04.180 ========================= 00:14:04.180 Active slot: 1 00:14:04.180 Slot 1 Firmware Revision: 25.01 00:14:04.180 00:14:04.180 00:14:04.180 Commands Supported and Effects 00:14:04.180 ============================== 00:14:04.180 Admin Commands 00:14:04.180 -------------- 00:14:04.180 Get Log Page (02h): Supported 00:14:04.180 Identify (06h): Supported 00:14:04.180 Abort (08h): Supported 00:14:04.180 Set Features (09h): Supported 00:14:04.180 Get Features (0Ah): Supported 00:14:04.180 Asynchronous Event Request (0Ch): Supported 00:14:04.180 Keep Alive (18h): Supported 00:14:04.180 I/O Commands 00:14:04.180 ------------ 00:14:04.180 Flush (00h): Supported LBA-Change 00:14:04.180 Write (01h): Supported LBA-Change 00:14:04.180 Read (02h): Supported 00:14:04.180 Compare (05h): Supported 00:14:04.180 Write Zeroes (08h): Supported LBA-Change 00:14:04.180 Dataset Management (09h): Supported LBA-Change 00:14:04.180 Copy (19h): Supported LBA-Change 00:14:04.180 
00:14:04.180 Error Log 00:14:04.180 ========= 00:14:04.180 00:14:04.180 Arbitration 00:14:04.180 =========== 00:14:04.180 Arbitration Burst: 1 00:14:04.180 00:14:04.180 Power Management 00:14:04.180 ================ 00:14:04.180 Number of Power States: 1 00:14:04.180 Current Power State: Power State #0 00:14:04.180 Power State #0: 00:14:04.180 Max Power: 0.00 W 00:14:04.180 Non-Operational State: Operational 00:14:04.180 Entry Latency: Not Reported 00:14:04.180 Exit Latency: Not Reported 00:14:04.180 Relative Read Throughput: 0 00:14:04.180 Relative Read Latency: 0 00:14:04.180 Relative Write Throughput: 0 00:14:04.180 Relative Write Latency: 0 00:14:04.180 Idle Power: Not Reported 00:14:04.180 Active Power: Not Reported 00:14:04.180 Non-Operational Permissive Mode: Not Supported 00:14:04.180 00:14:04.180 Health Information 00:14:04.180 ================== 00:14:04.180 Critical Warnings: 00:14:04.180 Available Spare Space: OK 00:14:04.180 Temperature: OK 00:14:04.180 Device Reliability: OK 00:14:04.180 Read Only: No 00:14:04.180 Volatile Memory Backup: OK 00:14:04.180 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:04.180 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:04.180 Available Spare: 0% 00:14:04.180 Available Spare Threshold: 0% 00:14:04.439 Life Percentage Used: 0% [2024-12-09 17:24:30.700560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:04.180 [2024-12-09 17:24:30.700567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:04.180 [2024-12-09 17:24:30.700594] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:04.180 [2024-12-09 17:24:30.700603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.180 [2024-12-09 17:24:30.700609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.180 [2024-12-09 17:24:30.700614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.180 [2024-12-09 17:24:30.700619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.180 [2024-12-09 17:24:30.700716] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:04.180 [2024-12-09 17:24:30.700724] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:04.180 [2024-12-09 17:24:30.701730] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:04.180 [2024-12-09 17:24:30.701782] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:04.180 [2024-12-09 17:24:30.701789] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:04.180 [2024-12-09 17:24:30.702728] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:04.180 [2024-12-09 17:24:30.702739] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:04.180 [2024-12-09 17:24:30.702790] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:04.180 [2024-12-09 17:24:30.703761] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:14:04.439 Data Units Read: 0 00:14:04.439 Data Units Written: 0 00:14:04.439 Host Read Commands: 0 00:14:04.439 Host Write Commands: 0 00:14:04.439 Controller Busy Time: 0 minutes 00:14:04.439 Power Cycles: 0 00:14:04.439 Power On Hours: 0 hours 00:14:04.439 Unsafe Shutdowns: 0 00:14:04.439 Unrecoverable Media Errors: 0 00:14:04.439 Lifetime Error Log Entries: 0 00:14:04.439 Warning Temperature Time: 0 minutes 00:14:04.439 Critical Temperature Time: 0 minutes 00:14:04.439 00:14:04.439 Number of Queues 00:14:04.439 ================ 00:14:04.439 Number of I/O Submission Queues: 127 00:14:04.439 Number of I/O Completion Queues: 127 00:14:04.439 00:14:04.439 Active Namespaces 00:14:04.439 ================= 00:14:04.439 Namespace ID:1 00:14:04.439 Error Recovery Timeout: Unlimited 00:14:04.439 Command Set Identifier: NVM (00h) 00:14:04.439 Deallocate: Supported 00:14:04.439 Deallocated/Unwritten Error: Not Supported 00:14:04.439 Deallocated Read Value: Unknown 00:14:04.439 Deallocate in Write Zeroes: Not Supported 00:14:04.439 Deallocated Guard Field: 0xFFFF 00:14:04.439 Flush: Supported 00:14:04.439 Reservation: Supported 00:14:04.439 Namespace Sharing Capabilities: Multiple Controllers 00:14:04.439 Size (in LBAs): 131072 (0GiB) 00:14:04.439 Capacity (in LBAs): 131072 (0GiB) 00:14:04.439 Utilization (in LBAs): 131072 (0GiB) 00:14:04.439 NGUID: 09083B8B540F400C9C9E3A46EF6C2271 00:14:04.439 UUID: 09083b8b-540f-400c-9c9e-3a46ef6c2271 00:14:04.439 Thin Provisioning: Not Supported 00:14:04.439 Per-NS Atomic Units: Yes 00:14:04.439 Atomic Boundary Size (Normal): 0 00:14:04.439 Atomic Boundary Size (PFail): 0 00:14:04.439 Atomic Boundary Offset: 0 00:14:04.439 Maximum Single Source Range Length: 65535 00:14:04.439 Maximum Copy Length: 65535 00:14:04.439 Maximum Source Range Count: 1 00:14:04.439 NGUID/EUI64 Never Reused: No 00:14:04.439 Namespace Write Protected: No 00:14:04.439 Number of LBA Formats: 1 00:14:04.439 Current LBA Format: LBA Format #00 00:14:04.439 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:04.439 00:14:04.439 17:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:04.439 [2024-12-09 17:24:30.929982] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:09.705 Initializing NVMe Controllers 00:14:09.705 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:09.705 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:09.706 Initialization complete. Launching workers. 00:14:09.706 ======================================================== 00:14:09.706 Latency(us) 00:14:09.706 Device Information : IOPS MiB/s Average min max 00:14:09.706 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39979.17 156.17 3201.88 956.48 6655.04 00:14:09.706 ======================================================== 00:14:09.706 Total : 39979.17 156.17 3201.88 956.48 6655.04 00:14:09.706 00:14:09.706 [2024-12-09 17:24:35.951240] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:09.706 17:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:09.706 [2024-12-09 17:24:36.184299] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:14.972 Initializing NVMe Controllers 00:14:14.972 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:14.972 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:14.972 Initialization complete. Launching workers. 00:14:14.972 ======================================================== 00:14:14.972 Latency(us) 00:14:14.972 Device Information : IOPS MiB/s Average min max 00:14:14.972 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16014.45 62.56 7998.17 4052.83 15962.89 00:14:14.972 ======================================================== 00:14:14.972 Total : 16014.45 62.56 7998.17 4052.83 15962.89 00:14:14.972 00:14:14.972 [2024-12-09 17:24:41.224910] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:14.972 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:14.972 [2024-12-09 17:24:41.437949] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:20.242 [2024-12-09 17:24:46.513461] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:20.242 Initializing NVMe Controllers 00:14:20.242 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:20.242 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:20.242 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:20.242 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:20.242 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:20.242 Initialization complete. 
Launching workers. 00:14:20.242 Starting thread on core 2 00:14:20.242 Starting thread on core 3 00:14:20.242 Starting thread on core 1 00:14:20.242 17:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:20.501 [2024-12-09 17:24:46.804333] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:23.790 [2024-12-09 17:24:49.873519] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:23.790 Initializing NVMe Controllers 00:14:23.790 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:23.790 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:23.790 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:23.790 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:23.790 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:23.790 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:23.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:23.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:23.790 Initialization complete. Launching workers. 
00:14:23.790 Starting thread on core 1 with urgent priority queue 00:14:23.790 Starting thread on core 2 with urgent priority queue 00:14:23.790 Starting thread on core 3 with urgent priority queue 00:14:23.790 Starting thread on core 0 with urgent priority queue 00:14:23.790 SPDK bdev Controller (SPDK1 ) core 0: 7311.67 IO/s 13.68 secs/100000 ios 00:14:23.790 SPDK bdev Controller (SPDK1 ) core 1: 7841.33 IO/s 12.75 secs/100000 ios 00:14:23.790 SPDK bdev Controller (SPDK1 ) core 2: 8101.67 IO/s 12.34 secs/100000 ios 00:14:23.790 SPDK bdev Controller (SPDK1 ) core 3: 8383.00 IO/s 11.93 secs/100000 ios 00:14:23.790 ======================================================== 00:14:23.790 00:14:23.790 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:23.790 [2024-12-09 17:24:50.151579] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:23.790 Initializing NVMe Controllers 00:14:23.790 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:23.790 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:23.790 Namespace ID: 1 size: 0GB 00:14:23.790 Initialization complete. 00:14:23.790 INFO: using host memory buffer for IO 00:14:23.790 Hello world! 
00:14:23.790 [2024-12-09 17:24:50.187887] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:23.790 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:24.049 [2024-12-09 17:24:50.477565] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:24.985 Initializing NVMe Controllers 00:14:24.985 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.985 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.985 Initialization complete. Launching workers. 00:14:24.985 submit (in ns) avg, min, max = 7518.4, 3147.6, 4000516.2 00:14:24.985 complete (in ns) avg, min, max = 19862.5, 1730.5, 4181905.7 00:14:24.985 00:14:24.985 Submit histogram 00:14:24.985 ================ 00:14:24.985 Range in us Cumulative Count 00:14:24.985 3.139 - 3.154: 0.0122% ( 2) 00:14:24.985 3.154 - 3.170: 0.0305% ( 3) 00:14:24.985 3.170 - 3.185: 0.0427% ( 2) 00:14:24.985 3.185 - 3.200: 0.1463% ( 17) 00:14:24.985 3.200 - 3.215: 0.9509% ( 132) 00:14:24.985 3.215 - 3.230: 4.1451% ( 524) 00:14:24.985 3.230 - 3.246: 9.7592% ( 921) 00:14:24.985 3.246 - 3.261: 15.7330% ( 980) 00:14:24.985 3.261 - 3.276: 22.9137% ( 1178) 00:14:24.985 3.276 - 3.291: 30.3017% ( 1212) 00:14:24.985 3.291 - 3.307: 36.0073% ( 936) 00:14:24.985 3.307 - 3.322: 40.9814% ( 816) 00:14:24.985 3.322 - 3.337: 45.6202% ( 761) 00:14:24.985 3.337 - 3.352: 50.3566% ( 777) 00:14:24.985 3.352 - 3.368: 54.0079% ( 599) 00:14:24.985 3.368 - 3.383: 60.4998% ( 1065) 00:14:24.985 3.383 - 3.398: 66.8820% ( 1047) 00:14:24.985 3.398 - 3.413: 72.1548% ( 865) 00:14:24.985 3.413 - 3.429: 78.4090% ( 1026) 00:14:24.986 3.429 - 3.444: 82.5846% ( 685) 00:14:24.986 3.444 - 3.459: 85.1021% ( 413) 
00:14:24.986 3.459 - 3.474: 86.4127% ( 215) 00:14:24.986 3.474 - 3.490: 87.0832% ( 110) 00:14:24.986 3.490 - 3.505: 87.5587% ( 78) 00:14:24.986 3.505 - 3.520: 88.0341% ( 78) 00:14:24.986 3.520 - 3.535: 88.7656% ( 120) 00:14:24.986 3.535 - 3.550: 89.5154% ( 123) 00:14:24.986 3.550 - 3.566: 90.5273% ( 166) 00:14:24.986 3.566 - 3.581: 91.5635% ( 170) 00:14:24.986 3.581 - 3.596: 92.4718% ( 149) 00:14:24.986 3.596 - 3.611: 93.3130% ( 138) 00:14:24.986 3.611 - 3.627: 94.1055% ( 130) 00:14:24.986 3.627 - 3.642: 95.0442% ( 154) 00:14:24.986 3.642 - 3.657: 95.9281% ( 145) 00:14:24.986 3.657 - 3.672: 96.9217% ( 163) 00:14:24.986 3.672 - 3.688: 97.5373% ( 101) 00:14:24.986 3.688 - 3.703: 98.0189% ( 79) 00:14:24.986 3.703 - 3.718: 98.4090% ( 64) 00:14:24.986 3.718 - 3.733: 98.7138% ( 50) 00:14:24.986 3.733 - 3.749: 98.9942% ( 46) 00:14:24.986 3.749 - 3.764: 99.2198% ( 37) 00:14:24.986 3.764 - 3.779: 99.3843% ( 27) 00:14:24.986 3.779 - 3.794: 99.4941% ( 18) 00:14:24.986 3.794 - 3.810: 99.5672% ( 12) 00:14:24.986 3.810 - 3.825: 99.5916% ( 4) 00:14:24.986 3.825 - 3.840: 99.6099% ( 3) 00:14:24.986 3.840 - 3.855: 99.6160% ( 1) 00:14:24.986 3.855 - 3.870: 99.6282% ( 2) 00:14:24.986 3.962 - 3.992: 99.6404% ( 2) 00:14:24.986 3.992 - 4.023: 99.6464% ( 1) 00:14:24.986 5.333 - 5.364: 99.6525% ( 1) 00:14:24.986 5.425 - 5.455: 99.6586% ( 1) 00:14:24.986 5.516 - 5.547: 99.6647% ( 1) 00:14:24.986 5.608 - 5.638: 99.6708% ( 1) 00:14:24.986 5.669 - 5.699: 99.6830% ( 2) 00:14:24.986 5.699 - 5.730: 99.6891% ( 1) 00:14:24.986 5.730 - 5.760: 99.6952% ( 1) 00:14:24.986 5.790 - 5.821: 99.7074% ( 2) 00:14:24.986 5.882 - 5.912: 99.7135% ( 1) 00:14:24.986 5.912 - 5.943: 99.7196% ( 1) 00:14:24.986 6.126 - 6.156: 99.7257% ( 1) 00:14:24.986 6.156 - 6.187: 99.7379% ( 2) 00:14:24.986 6.583 - 6.613: 99.7440% ( 1) 00:14:24.986 6.705 - 6.735: 99.7501% ( 1) 00:14:24.986 6.796 - 6.827: 99.7562% ( 1) 00:14:24.986 6.857 - 6.888: 99.7623% ( 1) 00:14:24.986 6.979 - 7.010: 99.7684% ( 1) 00:14:24.986 7.131 - 7.162: 
99.7745% ( 1) 00:14:24.986 7.192 - 7.223: 99.7806% ( 1) 00:14:24.986 7.223 - 7.253: 99.7927% ( 2) 00:14:24.986 7.314 - 7.345: 99.7988% ( 1) 00:14:24.986 7.467 - 7.497: 99.8110% ( 2) 00:14:24.986 7.497 - 7.528: 99.8171% ( 1) 00:14:24.986 7.528 - 7.558: 99.8293% ( 2) 00:14:24.986 7.650 - 7.680: 99.8354% ( 1) 00:14:24.986 7.771 - 7.802: 99.8415% ( 1) 00:14:24.986 7.802 - 7.863: 99.8476% ( 1) 00:14:24.986 7.924 - 7.985: 99.8537% ( 1) 00:14:24.986 7.985 - 8.046: 99.8659% ( 2) 00:14:24.986 8.229 - 8.290: 99.8720% ( 1) 00:14:24.986 9.082 - 9.143: 99.8781% ( 1) 00:14:24.986 9.448 - 9.509: 99.8842% ( 1) 00:14:24.986 9.874 - 9.935: 99.8903% ( 1) 00:14:24.986 14.385 - 14.446: 99.8964% ( 1) 00:14:24.986 3994.575 - 4025.783: 100.0000% ( 17) 00:14:24.986 00:14:24.986 Complete histogram 00:14:24.986 ================== 00:14:24.986 Range in us Cumulative Count 00:14:24.986 1.730 - 1.737: 0.0183% ( 3) 00:14:24.986 1.737 - 1.745: 0.0366% ( 3) 00:14:24.986 1.752 - 1.760: 0.0427% ( 1) 00:14:24.986 1.760 - 1.768: 0.1646% ( 20) 00:14:24.986 1.768 - 1.775: 1.1399% ( 160) 00:14:24.986 1.775 - 1.783: 4.1390% ( 492) 00:14:24.986 1.783 - 1.790: 7.1990% ( 502) 00:14:24.986 1.790 - 1.798: 8.6010% ( 230) 00:14:24.986 1.798 - 1.806: 9.3935% ( 130) 00:14:24.986 1.806 - 1.813: 11.9902% ( 426) 00:14:24.986 1.813 - 1.821: 28.0158% ( 2629) 00:14:24.986 1.821 - 1.829: 57.7751% ( 4882) 00:14:24.986 1.829 - 1.836: 76.6230% ( 3092) 00:14:24.986 1.836 - 1.844: 85.6202% ( 1476) 00:14:24.986 1.844 - 1.851: 91.0881% ( 897) 00:14:24.986 1.851 - 1.859: 93.7519% ( 437) 00:14:24.986 1.859 - 1.867: 94.8126% ( 174) 00:14:24.986 1.867 - 1.874: 95.3246% ( 84) 00:14:24.986 1.874 - 1.882: 95.6416% ( 52) 00:14:24.986 1.882 - 1.890: 96.2390% ( 98) 00:14:24.986 1.890 - 1.897: 96.9582% ( 118) 00:14:24.986 1.897 - 1.905: 97.7385% ( 128) 00:14:24.986 1.905 - 1.912: 98.3420% ( 99) 00:14:24.986 1.912 - 1.920: 98.6955% ( 58) 00:14:24.986 1.920 - 1.928: 98.8906% ( 32) 00:14:24.986 1.928 - 1.935: 98.9942% ( 17) 00:14:24.986 
1.935 - 1.943: 99.0369% ( 7) 00:14:24.986 1.943 - 1.950: 99.0613% ( 4) 00:14:24.986 1.950 - 1.966: 99.0674% ( 1) 00:14:24.986 1.966 - 1.981: 99.0735% ( 1) 00:14:24.986 1.981 - 1.996: 99.0795% ( 1) 00:14:24.986 1.996 - 2.011: 99.0856% ( 1) 00:14:24.986 2.011 - 2.027: 99.1161% ( 5) 00:14:24.986 2.027 - 2.042: 99.1283% ( 2) 00:14:24.986 2.042 - 2.057: 99.1344% ( 1) 00:14:24.986 2.057 - 2.072: 99.1710% ( 6) 00:14:24.986 2.072 - 2.088: 99.2624% ( 15) 00:14:24.986 2.088 - 2.103: 99.2746% ( 2) 00:14:24.986 2.103 - 2.118: 99.2868% ( 2) 00:14:24.986 2.164 - 2.179: 99.2929% ( 1) 00:14:24.986 2.194 - 2.210: 99.2990% ( 1) 00:14:24.986 2.255 - 2.270: 99.3051% ( 1) 00:14:24.986 2.270 - 2.286: 99.3112% ( 1) 00:14:24.986 2.316 - 2.331: 99.3173% ( 1) 00:14:24.986 2.408 - 2.423: 99.3234% ( 1) 00:14:24.986 3.764 - 3.779: 99.3295% ( 1) 00:14:24.986 3.794 - 3.810: 99.3356% ( 1) 00:14:24.986 3.962 - 3.992: 99.3417% ( 1) 00:14:24.986 4.023 - 4.053: 99.3478% ( 1) 00:14:24.986 4.358 - 4.389: 99.3539% ( 1) 00:14:24.986 4.480 - 4.510: 99.3600% ( 1) 00:14:24.986 4.510 - 4.541: 99.3660% ( 1) 00:14:24.986 4.571 - 4.602: 99.3721% ( 1) 00:14:24.986 4.876 - 4.907: 99.3782% ( 1) 00:14:24.986 4.998 - 5.029: 99.3843% ( 1) 00:14:24.986 5.029 - 5.059: 99.3904% ( 1) 00:14:24.986 5.150 - 5.181: 99.3965% ( 1) 00:14:24.986 5.242 - 5.272: 99.4026% ( 1) 00:14:24.986 5.272 - 5.303: 99.4087% ( 1) 00:14:24.986 5.455 - 5.486: 99.4148% ( 1) 00:14:24.986 5.486 - 5.516: 99.4209% ( 1) 00:14:24.986 5.577 - 5.608: 99.4270% ( 1) 00:14:24.986 5.669 - 5.699: 99.4331% ( 1) 00:14:24.986 5.851 - 5.882: 99.4392% ( 1) 00:14:24.986 6.004 - 6.034: 99.4453% ( 1) 00:14:24.986 6.065 - 6.095: 99.4514% ( 1) 00:14:24.986 6.095 - 6.126: 99.4575% ( 1) 00:14:24.986 6.156 - 6.187: 99.4636% ( 1) 00:14:24.986 6.248 - 6.278: 99.4758% ( 2) 00:14:24.986 6.461 - 6.491: 99.4819% ( 1) 00:14:24.986 6.552 - 6.583: 99.4880% ( 1) 00:14:24.986 6.613 - 6.644: 99.4941% ( 1) 00:14:24.986 6.644 - 6.674: 99.5002% ( 1) 00:14:24.986 6.735 - 6.766: 99.5123% 
( 2) 00:14:24.986 6.918 - 6.949: 99.5184% ( 1) 00:14:24.986 7.436 - 7.467: 99.5245% ( 1) 00:14:24.986 7.650 - 7.680: 99.5306% ( 1) 00:14:24.986 11.581 - 11.642: 99.5367% ( 1) 00:14:24.986 13.592 - 13.653: 99.5428% ( 1) 00:14:24.986 [2024-12-09 17:24:51.499382] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:25.245 38.522 - 38.766: 99.5489% ( 1) 00:14:25.245 3854.141 - 3869.745: 99.5550% ( 1) 00:14:25.245 3994.575 - 4025.783: 99.9939% ( 72) 00:14:25.245 4181.821 - 4213.029: 100.0000% ( 1) 00:14:25.245 00:14:25.245 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:25.245 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:25.245 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:25.245 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:25.245 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:25.245 [ 00:14:25.245 { 00:14:25.245 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:25.245 "subtype": "Discovery", 00:14:25.245 "listen_addresses": [], 00:14:25.245 "allow_any_host": true, 00:14:25.245 "hosts": [] 00:14:25.245 }, 00:14:25.245 { 00:14:25.245 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:25.245 "subtype": "NVMe", 00:14:25.245 "listen_addresses": [ 00:14:25.245 { 00:14:25.245 "trtype": "VFIOUSER", 00:14:25.245 "adrfam": "IPv4", 00:14:25.245 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:25.245 "trsvcid": "0" 00:14:25.245 } 00:14:25.245 ], 00:14:25.245 "allow_any_host": true, 00:14:25.245 "hosts": [], 
00:14:25.245 "serial_number": "SPDK1", 00:14:25.245 "model_number": "SPDK bdev Controller", 00:14:25.245 "max_namespaces": 32, 00:14:25.245 "min_cntlid": 1, 00:14:25.245 "max_cntlid": 65519, 00:14:25.245 "namespaces": [ 00:14:25.245 { 00:14:25.245 "nsid": 1, 00:14:25.245 "bdev_name": "Malloc1", 00:14:25.245 "name": "Malloc1", 00:14:25.245 "nguid": "09083B8B540F400C9C9E3A46EF6C2271", 00:14:25.245 "uuid": "09083b8b-540f-400c-9c9e-3a46ef6c2271" 00:14:25.245 } 00:14:25.245 ] 00:14:25.245 }, 00:14:25.245 { 00:14:25.245 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:25.245 "subtype": "NVMe", 00:14:25.245 "listen_addresses": [ 00:14:25.245 { 00:14:25.245 "trtype": "VFIOUSER", 00:14:25.245 "adrfam": "IPv4", 00:14:25.245 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:25.245 "trsvcid": "0" 00:14:25.245 } 00:14:25.245 ], 00:14:25.245 "allow_any_host": true, 00:14:25.245 "hosts": [], 00:14:25.245 "serial_number": "SPDK2", 00:14:25.245 "model_number": "SPDK bdev Controller", 00:14:25.245 "max_namespaces": 32, 00:14:25.245 "min_cntlid": 1, 00:14:25.245 "max_cntlid": 65519, 00:14:25.245 "namespaces": [ 00:14:25.245 { 00:14:25.245 "nsid": 1, 00:14:25.245 "bdev_name": "Malloc2", 00:14:25.245 "name": "Malloc2", 00:14:25.245 "nguid": "7CDCD264659A4BA58C613EBA86B3AA3C", 00:14:25.245 "uuid": "7cdcd264-659a-4ba5-8c61-3eba86b3aa3c" 00:14:25.245 } 00:14:25.245 ] 00:14:25.245 } 00:14:25.245 ] 00:14:25.246 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:25.246 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:25.246 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1868827 00:14:25.246 17:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:25.246 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:25.246 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:25.246 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:25.246 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:25.246 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:25.246 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:25.504 [2024-12-09 17:24:51.881270] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:25.504 Malloc3 00:14:25.504 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:25.763 [2024-12-09 17:24:52.123044] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:25.763 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:25.763 Asynchronous Event Request test 00:14:25.763 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:25.763 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:25.763 Registering asynchronous event callbacks... 00:14:25.763 Starting namespace attribute notice tests for all controllers... 
00:14:25.763 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:25.763 aer_cb - Changed Namespace 00:14:25.763 Cleaning up... 00:14:26.024 [ 00:14:26.024 { 00:14:26.024 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:26.024 "subtype": "Discovery", 00:14:26.024 "listen_addresses": [], 00:14:26.024 "allow_any_host": true, 00:14:26.024 "hosts": [] 00:14:26.024 }, 00:14:26.024 { 00:14:26.024 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:26.024 "subtype": "NVMe", 00:14:26.024 "listen_addresses": [ 00:14:26.024 { 00:14:26.024 "trtype": "VFIOUSER", 00:14:26.024 "adrfam": "IPv4", 00:14:26.024 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:26.024 "trsvcid": "0" 00:14:26.024 } 00:14:26.024 ], 00:14:26.024 "allow_any_host": true, 00:14:26.024 "hosts": [], 00:14:26.024 "serial_number": "SPDK1", 00:14:26.024 "model_number": "SPDK bdev Controller", 00:14:26.024 "max_namespaces": 32, 00:14:26.024 "min_cntlid": 1, 00:14:26.024 "max_cntlid": 65519, 00:14:26.024 "namespaces": [ 00:14:26.024 { 00:14:26.024 "nsid": 1, 00:14:26.024 "bdev_name": "Malloc1", 00:14:26.024 "name": "Malloc1", 00:14:26.024 "nguid": "09083B8B540F400C9C9E3A46EF6C2271", 00:14:26.024 "uuid": "09083b8b-540f-400c-9c9e-3a46ef6c2271" 00:14:26.024 }, 00:14:26.024 { 00:14:26.024 "nsid": 2, 00:14:26.024 "bdev_name": "Malloc3", 00:14:26.024 "name": "Malloc3", 00:14:26.024 "nguid": "A0C4B8A7BF304D22A99D1CDA7E0E2E03", 00:14:26.024 "uuid": "a0c4b8a7-bf30-4d22-a99d-1cda7e0e2e03" 00:14:26.024 } 00:14:26.024 ] 00:14:26.024 }, 00:14:26.024 { 00:14:26.024 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:26.024 "subtype": "NVMe", 00:14:26.024 "listen_addresses": [ 00:14:26.024 { 00:14:26.024 "trtype": "VFIOUSER", 00:14:26.024 "adrfam": "IPv4", 00:14:26.024 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:26.024 "trsvcid": "0" 00:14:26.024 } 00:14:26.024 ], 00:14:26.024 "allow_any_host": true, 00:14:26.024 "hosts": [], 00:14:26.024 "serial_number": 
"SPDK2", 00:14:26.024 "model_number": "SPDK bdev Controller", 00:14:26.024 "max_namespaces": 32, 00:14:26.024 "min_cntlid": 1, 00:14:26.024 "max_cntlid": 65519, 00:14:26.024 "namespaces": [ 00:14:26.024 { 00:14:26.024 "nsid": 1, 00:14:26.024 "bdev_name": "Malloc2", 00:14:26.024 "name": "Malloc2", 00:14:26.024 "nguid": "7CDCD264659A4BA58C613EBA86B3AA3C", 00:14:26.024 "uuid": "7cdcd264-659a-4ba5-8c61-3eba86b3aa3c" 00:14:26.024 } 00:14:26.024 ] 00:14:26.024 } 00:14:26.024 ] 00:14:26.024 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1868827 00:14:26.024 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:26.024 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:26.024 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:26.024 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:26.024 [2024-12-09 17:24:52.358216] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:14:26.024 [2024-12-09 17:24:52.358252] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1868872 ] 00:14:26.024 [2024-12-09 17:24:52.398522] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:26.024 [2024-12-09 17:24:52.407426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:26.024 [2024-12-09 17:24:52.407452] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa9276e6000 00:14:26.024 [2024-12-09 17:24:52.408424] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:26.024 [2024-12-09 17:24:52.409426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:26.024 [2024-12-09 17:24:52.410433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:26.024 [2024-12-09 17:24:52.411448] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:26.024 [2024-12-09 17:24:52.412456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:26.024 [2024-12-09 17:24:52.413464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:26.024 [2024-12-09 17:24:52.414471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:26.024 
[2024-12-09 17:24:52.415481] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:26.024 [2024-12-09 17:24:52.416487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:26.024 [2024-12-09 17:24:52.416498] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa9276db000 00:14:26.024 [2024-12-09 17:24:52.417413] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:26.024 [2024-12-09 17:24:52.426785] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:26.024 [2024-12-09 17:24:52.426811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:26.024 [2024-12-09 17:24:52.431903] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:26.024 [2024-12-09 17:24:52.431943] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:26.024 [2024-12-09 17:24:52.432018] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:26.024 [2024-12-09 17:24:52.432031] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:26.024 [2024-12-09 17:24:52.432035] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:26.024 [2024-12-09 17:24:52.432915] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:26.024 [2024-12-09 17:24:52.432925] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:26.024 [2024-12-09 17:24:52.432931] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:26.024 [2024-12-09 17:24:52.433912] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:26.024 [2024-12-09 17:24:52.433921] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:26.024 [2024-12-09 17:24:52.433928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:26.024 [2024-12-09 17:24:52.434921] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:26.024 [2024-12-09 17:24:52.434931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:26.024 [2024-12-09 17:24:52.435927] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:26.024 [2024-12-09 17:24:52.435935] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:26.024 [2024-12-09 17:24:52.435940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:26.024 [2024-12-09 17:24:52.435946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:26.024 [2024-12-09 17:24:52.436053] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:26.024 [2024-12-09 17:24:52.436058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:26.024 [2024-12-09 17:24:52.436062] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:26.024 [2024-12-09 17:24:52.436941] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:26.024 [2024-12-09 17:24:52.437949] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:26.024 [2024-12-09 17:24:52.438958] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:26.024 [2024-12-09 17:24:52.439959] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:26.024 [2024-12-09 17:24:52.439998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:26.024 [2024-12-09 17:24:52.440973] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:26.024 [2024-12-09 17:24:52.440986] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:26.025 [2024-12-09 17:24:52.440991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.441007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:26.025 [2024-12-09 17:24:52.441015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.441028] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:26.025 [2024-12-09 17:24:52.441033] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:26.025 [2024-12-09 17:24:52.441036] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:26.025 [2024-12-09 17:24:52.441048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:26.025 [2024-12-09 17:24:52.448176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:26.025 [2024-12-09 17:24:52.448188] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:26.025 [2024-12-09 17:24:52.448196] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:26.025 [2024-12-09 17:24:52.448200] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:26.025 [2024-12-09 17:24:52.448206] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:26.025 [2024-12-09 17:24:52.448211] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:26.025 [2024-12-09 17:24:52.448216] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:26.025 [2024-12-09 17:24:52.448221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.448229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.448241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:26.025 [2024-12-09 17:24:52.454205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:26.025 [2024-12-09 17:24:52.454219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.025 [2024-12-09 17:24:52.454227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.025 [2024-12-09 17:24:52.454234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.025 [2024-12-09 17:24:52.454242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.025 [2024-12-09 17:24:52.454246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.454255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.454263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:26.025 [2024-12-09 17:24:52.464173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:26.025 [2024-12-09 17:24:52.464181] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:26.025 [2024-12-09 17:24:52.464186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.464192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.464197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.464205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:26.025 [2024-12-09 17:24:52.472173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:26.025 [2024-12-09 17:24:52.472231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.472239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:26.025 
[2024-12-09 17:24:52.472246] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:26.025 [2024-12-09 17:24:52.472250] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:26.025 [2024-12-09 17:24:52.472253] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:26.025 [2024-12-09 17:24:52.472259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:26.025 [2024-12-09 17:24:52.480173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:26.025 [2024-12-09 17:24:52.480183] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:26.025 [2024-12-09 17:24:52.480196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.480203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.480209] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:26.025 [2024-12-09 17:24:52.480213] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:26.025 [2024-12-09 17:24:52.480216] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:26.025 [2024-12-09 17:24:52.480222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:26.025 [2024-12-09 17:24:52.488173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:26.025 [2024-12-09 17:24:52.488187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.488194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.488201] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:26.025 [2024-12-09 17:24:52.488207] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:26.025 [2024-12-09 17:24:52.488210] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:26.025 [2024-12-09 17:24:52.488215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:26.025 [2024-12-09 17:24:52.496173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:26.025 [2024-12-09 17:24:52.496183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.496189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.496196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.496203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.496208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.496213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.496217] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:26.025 [2024-12-09 17:24:52.496221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:26.025 [2024-12-09 17:24:52.496226] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:26.025 [2024-12-09 17:24:52.496242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:26.025 [2024-12-09 17:24:52.504175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:26.025 [2024-12-09 17:24:52.504188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:26.025 [2024-12-09 17:24:52.512173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:26.025 [2024-12-09 17:24:52.512186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:26.025 [2024-12-09 17:24:52.518200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:26.025 [2024-12-09 
17:24:52.518213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:26.025 [2024-12-09 17:24:52.528173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:26.025 [2024-12-09 17:24:52.528191] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:26.025 [2024-12-09 17:24:52.528195] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:26.025 [2024-12-09 17:24:52.528198] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:26.025 [2024-12-09 17:24:52.528201] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:26.025 [2024-12-09 17:24:52.528204] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:26.025 [2024-12-09 17:24:52.528210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:26.025 [2024-12-09 17:24:52.528219] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:26.025 [2024-12-09 17:24:52.528223] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:26.025 [2024-12-09 17:24:52.528226] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:26.025 [2024-12-09 17:24:52.528231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:26.025 [2024-12-09 17:24:52.528237] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:26.025 [2024-12-09 17:24:52.528241] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:26.026 [2024-12-09 17:24:52.528244] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:26.026 [2024-12-09 17:24:52.528249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:26.026 [2024-12-09 17:24:52.528255] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:26.026 [2024-12-09 17:24:52.528259] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:26.026 [2024-12-09 17:24:52.528262] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:26.026 [2024-12-09 17:24:52.528267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:26.026 [2024-12-09 17:24:52.536174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:26.026 [2024-12-09 17:24:52.536189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:26.026 [2024-12-09 17:24:52.536198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:26.026 [2024-12-09 17:24:52.536204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:26.026 ===================================================== 00:14:26.026 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:26.026 ===================================================== 00:14:26.026 Controller Capabilities/Features 00:14:26.026 
================================ 00:14:26.026 Vendor ID: 4e58 00:14:26.026 Subsystem Vendor ID: 4e58 00:14:26.026 Serial Number: SPDK2 00:14:26.026 Model Number: SPDK bdev Controller 00:14:26.026 Firmware Version: 25.01 00:14:26.026 Recommended Arb Burst: 6 00:14:26.026 IEEE OUI Identifier: 8d 6b 50 00:14:26.026 Multi-path I/O 00:14:26.026 May have multiple subsystem ports: Yes 00:14:26.026 May have multiple controllers: Yes 00:14:26.026 Associated with SR-IOV VF: No 00:14:26.026 Max Data Transfer Size: 131072 00:14:26.026 Max Number of Namespaces: 32 00:14:26.026 Max Number of I/O Queues: 127 00:14:26.026 NVMe Specification Version (VS): 1.3 00:14:26.026 NVMe Specification Version (Identify): 1.3 00:14:26.026 Maximum Queue Entries: 256 00:14:26.026 Contiguous Queues Required: Yes 00:14:26.026 Arbitration Mechanisms Supported 00:14:26.026 Weighted Round Robin: Not Supported 00:14:26.026 Vendor Specific: Not Supported 00:14:26.026 Reset Timeout: 15000 ms 00:14:26.026 Doorbell Stride: 4 bytes 00:14:26.026 NVM Subsystem Reset: Not Supported 00:14:26.026 Command Sets Supported 00:14:26.026 NVM Command Set: Supported 00:14:26.026 Boot Partition: Not Supported 00:14:26.026 Memory Page Size Minimum: 4096 bytes 00:14:26.026 Memory Page Size Maximum: 4096 bytes 00:14:26.026 Persistent Memory Region: Not Supported 00:14:26.026 Optional Asynchronous Events Supported 00:14:26.026 Namespace Attribute Notices: Supported 00:14:26.026 Firmware Activation Notices: Not Supported 00:14:26.026 ANA Change Notices: Not Supported 00:14:26.026 PLE Aggregate Log Change Notices: Not Supported 00:14:26.026 LBA Status Info Alert Notices: Not Supported 00:14:26.026 EGE Aggregate Log Change Notices: Not Supported 00:14:26.026 Normal NVM Subsystem Shutdown event: Not Supported 00:14:26.026 Zone Descriptor Change Notices: Not Supported 00:14:26.026 Discovery Log Change Notices: Not Supported 00:14:26.026 Controller Attributes 00:14:26.026 128-bit Host Identifier: Supported 00:14:26.026 
Non-Operational Permissive Mode: Not Supported 00:14:26.026 NVM Sets: Not Supported 00:14:26.026 Read Recovery Levels: Not Supported 00:14:26.026 Endurance Groups: Not Supported 00:14:26.026 Predictable Latency Mode: Not Supported 00:14:26.026 Traffic Based Keep ALive: Not Supported 00:14:26.026 Namespace Granularity: Not Supported 00:14:26.026 SQ Associations: Not Supported 00:14:26.026 UUID List: Not Supported 00:14:26.026 Multi-Domain Subsystem: Not Supported 00:14:26.026 Fixed Capacity Management: Not Supported 00:14:26.026 Variable Capacity Management: Not Supported 00:14:26.026 Delete Endurance Group: Not Supported 00:14:26.026 Delete NVM Set: Not Supported 00:14:26.026 Extended LBA Formats Supported: Not Supported 00:14:26.026 Flexible Data Placement Supported: Not Supported 00:14:26.026 00:14:26.026 Controller Memory Buffer Support 00:14:26.026 ================================ 00:14:26.026 Supported: No 00:14:26.026 00:14:26.026 Persistent Memory Region Support 00:14:26.026 ================================ 00:14:26.026 Supported: No 00:14:26.026 00:14:26.026 Admin Command Set Attributes 00:14:26.026 ============================ 00:14:26.026 Security Send/Receive: Not Supported 00:14:26.026 Format NVM: Not Supported 00:14:26.026 Firmware Activate/Download: Not Supported 00:14:26.026 Namespace Management: Not Supported 00:14:26.026 Device Self-Test: Not Supported 00:14:26.026 Directives: Not Supported 00:14:26.026 NVMe-MI: Not Supported 00:14:26.026 Virtualization Management: Not Supported 00:14:26.026 Doorbell Buffer Config: Not Supported 00:14:26.026 Get LBA Status Capability: Not Supported 00:14:26.026 Command & Feature Lockdown Capability: Not Supported 00:14:26.026 Abort Command Limit: 4 00:14:26.026 Async Event Request Limit: 4 00:14:26.026 Number of Firmware Slots: N/A 00:14:26.026 Firmware Slot 1 Read-Only: N/A 00:14:26.026 Firmware Activation Without Reset: N/A 00:14:26.026 Multiple Update Detection Support: N/A 00:14:26.026 Firmware Update 
Granularity: No Information Provided 00:14:26.026 Per-Namespace SMART Log: No 00:14:26.026 Asymmetric Namespace Access Log Page: Not Supported 00:14:26.026 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:26.026 Command Effects Log Page: Supported 00:14:26.026 Get Log Page Extended Data: Supported 00:14:26.026 Telemetry Log Pages: Not Supported 00:14:26.026 Persistent Event Log Pages: Not Supported 00:14:26.026 Supported Log Pages Log Page: May Support 00:14:26.026 Commands Supported & Effects Log Page: Not Supported 00:14:26.026 Feature Identifiers & Effects Log Page:May Support 00:14:26.026 NVMe-MI Commands & Effects Log Page: May Support 00:14:26.026 Data Area 4 for Telemetry Log: Not Supported 00:14:26.026 Error Log Page Entries Supported: 128 00:14:26.026 Keep Alive: Supported 00:14:26.026 Keep Alive Granularity: 10000 ms 00:14:26.026 00:14:26.026 NVM Command Set Attributes 00:14:26.026 ========================== 00:14:26.026 Submission Queue Entry Size 00:14:26.026 Max: 64 00:14:26.026 Min: 64 00:14:26.026 Completion Queue Entry Size 00:14:26.026 Max: 16 00:14:26.026 Min: 16 00:14:26.026 Number of Namespaces: 32 00:14:26.026 Compare Command: Supported 00:14:26.026 Write Uncorrectable Command: Not Supported 00:14:26.026 Dataset Management Command: Supported 00:14:26.026 Write Zeroes Command: Supported 00:14:26.026 Set Features Save Field: Not Supported 00:14:26.026 Reservations: Not Supported 00:14:26.026 Timestamp: Not Supported 00:14:26.026 Copy: Supported 00:14:26.026 Volatile Write Cache: Present 00:14:26.026 Atomic Write Unit (Normal): 1 00:14:26.026 Atomic Write Unit (PFail): 1 00:14:26.026 Atomic Compare & Write Unit: 1 00:14:26.026 Fused Compare & Write: Supported 00:14:26.026 Scatter-Gather List 00:14:26.026 SGL Command Set: Supported (Dword aligned) 00:14:26.026 SGL Keyed: Not Supported 00:14:26.026 SGL Bit Bucket Descriptor: Not Supported 00:14:26.026 SGL Metadata Pointer: Not Supported 00:14:26.026 Oversized SGL: Not Supported 00:14:26.026 SGL 
Metadata Address: Not Supported 00:14:26.026 SGL Offset: Not Supported 00:14:26.026 Transport SGL Data Block: Not Supported 00:14:26.026 Replay Protected Memory Block: Not Supported 00:14:26.026 00:14:26.026 Firmware Slot Information 00:14:26.026 ========================= 00:14:26.026 Active slot: 1 00:14:26.026 Slot 1 Firmware Revision: 25.01 00:14:26.026 00:14:26.026 00:14:26.026 Commands Supported and Effects 00:14:26.026 ============================== 00:14:26.026 Admin Commands 00:14:26.026 -------------- 00:14:26.026 Get Log Page (02h): Supported 00:14:26.026 Identify (06h): Supported 00:14:26.026 Abort (08h): Supported 00:14:26.026 Set Features (09h): Supported 00:14:26.026 Get Features (0Ah): Supported 00:14:26.026 Asynchronous Event Request (0Ch): Supported 00:14:26.026 Keep Alive (18h): Supported 00:14:26.026 I/O Commands 00:14:26.026 ------------ 00:14:26.026 Flush (00h): Supported LBA-Change 00:14:26.026 Write (01h): Supported LBA-Change 00:14:26.026 Read (02h): Supported 00:14:26.026 Compare (05h): Supported 00:14:26.026 Write Zeroes (08h): Supported LBA-Change 00:14:26.026 Dataset Management (09h): Supported LBA-Change 00:14:26.026 Copy (19h): Supported LBA-Change 00:14:26.026 00:14:26.026 Error Log 00:14:26.026 ========= 00:14:26.026 00:14:26.026 Arbitration 00:14:26.026 =========== 00:14:26.026 Arbitration Burst: 1 00:14:26.026 00:14:26.026 Power Management 00:14:26.026 ================ 00:14:26.026 Number of Power States: 1 00:14:26.026 Current Power State: Power State #0 00:14:26.026 Power State #0: 00:14:26.026 Max Power: 0.00 W 00:14:26.026 Non-Operational State: Operational 00:14:26.026 Entry Latency: Not Reported 00:14:26.026 Exit Latency: Not Reported 00:14:26.026 Relative Read Throughput: 0 00:14:26.026 Relative Read Latency: 0 00:14:26.026 Relative Write Throughput: 0 00:14:26.027 Relative Write Latency: 0 00:14:26.027 Idle Power: Not Reported 00:14:26.027 Active Power: Not Reported 00:14:26.027 Non-Operational Permissive Mode: Not 
Supported 00:14:26.027 00:14:26.027 Health Information 00:14:26.027 ================== 00:14:26.027 Critical Warnings: 00:14:26.027 Available Spare Space: OK 00:14:26.027 Temperature: OK 00:14:26.027 Device Reliability: OK 00:14:26.027 Read Only: No 00:14:26.027 Volatile Memory Backup: OK 00:14:26.027 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:26.027 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:26.027 Available Spare: 0% 00:14:26.027 Available Sp[2024-12-09 17:24:52.536296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:26.027 [2024-12-09 17:24:52.544173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:26.027 [2024-12-09 17:24:52.544209] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:26.027 [2024-12-09 17:24:52.544218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.027 [2024-12-09 17:24:52.544224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.027 [2024-12-09 17:24:52.544229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.027 [2024-12-09 17:24:52.544235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.027 [2024-12-09 17:24:52.544273] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:26.027 [2024-12-09 17:24:52.544282] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:26.027 
[2024-12-09 17:24:52.545285] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:26.027 [2024-12-09 17:24:52.545331] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:26.027 [2024-12-09 17:24:52.545340] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:26.027 [2024-12-09 17:24:52.546300] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:26.027 [2024-12-09 17:24:52.546313] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:26.027 [2024-12-09 17:24:52.546357] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:26.027 [2024-12-09 17:24:52.547313] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:26.286 are Threshold: 0% 00:14:26.286 Life Percentage Used: 0% 00:14:26.286 Data Units Read: 0 00:14:26.286 Data Units Written: 0 00:14:26.286 Host Read Commands: 0 00:14:26.286 Host Write Commands: 0 00:14:26.286 Controller Busy Time: 0 minutes 00:14:26.286 Power Cycles: 0 00:14:26.286 Power On Hours: 0 hours 00:14:26.286 Unsafe Shutdowns: 0 00:14:26.286 Unrecoverable Media Errors: 0 00:14:26.286 Lifetime Error Log Entries: 0 00:14:26.286 Warning Temperature Time: 0 minutes 00:14:26.286 Critical Temperature Time: 0 minutes 00:14:26.286 00:14:26.286 Number of Queues 00:14:26.286 ================ 00:14:26.286 Number of I/O Submission Queues: 127 00:14:26.286 Number of I/O Completion Queues: 127 00:14:26.286 00:14:26.286 Active Namespaces 00:14:26.286 ================= 00:14:26.286 Namespace ID:1 00:14:26.286 Error Recovery Timeout: Unlimited 
00:14:26.286 Command Set Identifier: NVM (00h) 00:14:26.286 Deallocate: Supported 00:14:26.286 Deallocated/Unwritten Error: Not Supported 00:14:26.286 Deallocated Read Value: Unknown 00:14:26.286 Deallocate in Write Zeroes: Not Supported 00:14:26.286 Deallocated Guard Field: 0xFFFF 00:14:26.286 Flush: Supported 00:14:26.286 Reservation: Supported 00:14:26.286 Namespace Sharing Capabilities: Multiple Controllers 00:14:26.286 Size (in LBAs): 131072 (0GiB) 00:14:26.286 Capacity (in LBAs): 131072 (0GiB) 00:14:26.286 Utilization (in LBAs): 131072 (0GiB) 00:14:26.286 NGUID: 7CDCD264659A4BA58C613EBA86B3AA3C 00:14:26.286 UUID: 7cdcd264-659a-4ba5-8c61-3eba86b3aa3c 00:14:26.286 Thin Provisioning: Not Supported 00:14:26.286 Per-NS Atomic Units: Yes 00:14:26.286 Atomic Boundary Size (Normal): 0 00:14:26.286 Atomic Boundary Size (PFail): 0 00:14:26.286 Atomic Boundary Offset: 0 00:14:26.286 Maximum Single Source Range Length: 65535 00:14:26.286 Maximum Copy Length: 65535 00:14:26.286 Maximum Source Range Count: 1 00:14:26.286 NGUID/EUI64 Never Reused: No 00:14:26.286 Namespace Write Protected: No 00:14:26.286 Number of LBA Formats: 1 00:14:26.286 Current LBA Format: LBA Format #00 00:14:26.286 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:26.286 00:14:26.286 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:26.286 [2024-12-09 17:24:52.780541] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:31.607 Initializing NVMe Controllers 00:14:31.607 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:31.607 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:31.607 Initialization complete. Launching workers. 00:14:31.607 ======================================================== 00:14:31.607 Latency(us) 00:14:31.607 Device Information : IOPS MiB/s Average min max 00:14:31.607 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39936.71 156.00 3204.90 975.32 8611.94 00:14:31.607 ======================================================== 00:14:31.607 Total : 39936.71 156.00 3204.90 975.32 8611.94 00:14:31.607 00:14:31.607 [2024-12-09 17:24:57.883428] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:31.607 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:31.607 [2024-12-09 17:24:58.127133] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:36.876 Initializing NVMe Controllers 00:14:36.876 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:36.876 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:36.876 Initialization complete. Launching workers. 
00:14:36.876 ======================================================== 00:14:36.876 Latency(us) 00:14:36.876 Device Information : IOPS MiB/s Average min max 00:14:36.877 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39897.49 155.85 3208.06 979.42 8209.87 00:14:36.877 ======================================================== 00:14:36.877 Total : 39897.49 155.85 3208.06 979.42 8209.87 00:14:36.877 00:14:36.877 [2024-12-09 17:25:03.150399] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:36.877 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:36.877 [2024-12-09 17:25:03.357616] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:42.146 [2024-12-09 17:25:08.495267] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:42.146 Initializing NVMe Controllers 00:14:42.146 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:42.146 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:42.146 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:42.146 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:42.146 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:42.146 Initialization complete. Launching workers. 
00:14:42.146 Starting thread on core 2 00:14:42.146 Starting thread on core 3 00:14:42.146 Starting thread on core 1 00:14:42.146 17:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:42.405 [2024-12-09 17:25:08.791658] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:45.694 [2024-12-09 17:25:12.013361] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:45.694 Initializing NVMe Controllers 00:14:45.694 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:45.694 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:45.694 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:45.694 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:45.694 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:45.694 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:45.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:45.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:45.694 Initialization complete. Launching workers. 
00:14:45.694 Starting thread on core 1 with urgent priority queue 00:14:45.694 Starting thread on core 2 with urgent priority queue 00:14:45.694 Starting thread on core 3 with urgent priority queue 00:14:45.694 Starting thread on core 0 with urgent priority queue 00:14:45.694 SPDK bdev Controller (SPDK2 ) core 0: 7769.33 IO/s 12.87 secs/100000 ios 00:14:45.694 SPDK bdev Controller (SPDK2 ) core 1: 6498.67 IO/s 15.39 secs/100000 ios 00:14:45.694 SPDK bdev Controller (SPDK2 ) core 2: 5735.33 IO/s 17.44 secs/100000 ios 00:14:45.694 SPDK bdev Controller (SPDK2 ) core 3: 7294.00 IO/s 13.71 secs/100000 ios 00:14:45.694 ======================================================== 00:14:45.694 00:14:45.694 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:45.953 [2024-12-09 17:25:12.300594] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:45.953 Initializing NVMe Controllers 00:14:45.953 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:45.953 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:45.953 Namespace ID: 1 size: 0GB 00:14:45.953 Initialization complete. 00:14:45.953 INFO: using host memory buffer for IO 00:14:45.953 Hello world! 
00:14:45.953 [2024-12-09 17:25:12.312667] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:45.953 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:46.213 [2024-12-09 17:25:12.591871] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:47.150 Initializing NVMe Controllers 00:14:47.150 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.150 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.150 Initialization complete. Launching workers. 00:14:47.150 submit (in ns) avg, min, max = 7287.3, 3141.0, 3999333.3 00:14:47.150 complete (in ns) avg, min, max = 19397.7, 1714.3, 3999888.6 00:14:47.150 00:14:47.150 Submit histogram 00:14:47.150 ================ 00:14:47.150 Range in us Cumulative Count 00:14:47.150 3.139 - 3.154: 0.0244% ( 4) 00:14:47.150 3.154 - 3.170: 0.0305% ( 1) 00:14:47.150 3.170 - 3.185: 0.0366% ( 1) 00:14:47.150 3.185 - 3.200: 0.1954% ( 26) 00:14:47.150 3.200 - 3.215: 1.0928% ( 147) 00:14:47.150 3.215 - 3.230: 3.9316% ( 465) 00:14:47.150 3.230 - 3.246: 8.2967% ( 715) 00:14:47.150 3.246 - 3.261: 12.9426% ( 761) 00:14:47.150 3.261 - 3.276: 19.0110% ( 994) 00:14:47.150 3.276 - 3.291: 26.4164% ( 1213) 00:14:47.150 3.291 - 3.307: 32.6984% ( 1029) 00:14:47.150 3.307 - 3.322: 38.8400% ( 1006) 00:14:47.150 3.322 - 3.337: 44.0781% ( 858) 00:14:47.150 3.337 - 3.352: 48.8095% ( 775) 00:14:47.150 3.352 - 3.368: 53.2723% ( 731) 00:14:47.150 3.368 - 3.383: 58.8645% ( 916) 00:14:47.150 3.383 - 3.398: 65.3724% ( 1066) 00:14:47.150 3.398 - 3.413: 70.2747% ( 803) 00:14:47.150 3.413 - 3.429: 75.9035% ( 922) 00:14:47.150 3.429 - 3.444: 80.7448% ( 793) 00:14:47.150 3.444 - 3.459: 83.7790% ( 497) 
00:14:47.150 3.459 - 3.474: 85.9096% ( 349) 00:14:47.150 3.474 - 3.490: 87.3504% ( 236) 00:14:47.150 3.490 - 3.505: 88.2540% ( 148) 00:14:47.150 3.505 - 3.520: 88.9316% ( 111) 00:14:47.150 3.520 - 3.535: 89.6947% ( 125) 00:14:47.150 3.535 - 3.550: 90.3846% ( 113) 00:14:47.150 3.550 - 3.566: 91.2698% ( 145) 00:14:47.150 3.566 - 3.581: 92.0757% ( 132) 00:14:47.150 3.581 - 3.596: 92.7961% ( 118) 00:14:47.150 3.596 - 3.611: 93.4860% ( 113) 00:14:47.150 3.611 - 3.627: 94.2125% ( 119) 00:14:47.150 3.627 - 3.642: 94.9512% ( 121) 00:14:47.150 3.642 - 3.657: 95.7143% ( 125) 00:14:47.150 3.657 - 3.672: 96.4042% ( 113) 00:14:47.150 3.672 - 3.688: 97.0818% ( 111) 00:14:47.150 3.688 - 3.703: 97.7106% ( 103) 00:14:47.150 3.703 - 3.718: 98.1746% ( 76) 00:14:47.150 3.718 - 3.733: 98.6691% ( 81) 00:14:47.150 3.733 - 3.749: 99.0110% ( 56) 00:14:47.150 3.749 - 3.764: 99.2247% ( 35) 00:14:47.150 3.764 - 3.779: 99.4017% ( 29) 00:14:47.150 3.779 - 3.794: 99.4628% ( 10) 00:14:47.150 3.794 - 3.810: 99.5238% ( 10) 00:14:47.150 3.810 - 3.825: 99.5910% ( 11) 00:14:47.150 3.825 - 3.840: 99.6154% ( 4) 00:14:47.150 3.840 - 3.855: 99.6276% ( 2) 00:14:47.151 3.855 - 3.870: 99.6398% ( 2) 00:14:47.151 3.992 - 4.023: 99.6459% ( 1) 00:14:47.151 4.876 - 4.907: 99.6520% ( 1) 00:14:47.151 5.303 - 5.333: 99.6581% ( 1) 00:14:47.151 5.333 - 5.364: 99.6642% ( 1) 00:14:47.151 5.455 - 5.486: 99.6703% ( 1) 00:14:47.151 5.790 - 5.821: 99.6764% ( 1) 00:14:47.151 5.851 - 5.882: 99.6825% ( 1) 00:14:47.151 5.912 - 5.943: 99.6886% ( 1) 00:14:47.151 6.034 - 6.065: 99.6947% ( 1) 00:14:47.151 6.095 - 6.126: 99.7009% ( 1) 00:14:47.151 6.126 - 6.156: 99.7131% ( 2) 00:14:47.151 6.217 - 6.248: 99.7192% ( 1) 00:14:47.151 6.309 - 6.339: 99.7253% ( 1) 00:14:47.151 6.339 - 6.370: 99.7314% ( 1) 00:14:47.151 6.370 - 6.400: 99.7375% ( 1) 00:14:47.151 6.430 - 6.461: 99.7436% ( 1) 00:14:47.151 6.552 - 6.583: 99.7497% ( 1) 00:14:47.151 6.613 - 6.644: 99.7558% ( 1) 00:14:47.151 6.674 - 6.705: 99.7619% ( 1) 00:14:47.151 6.827 - 6.857: 
99.7680% ( 1) 00:14:47.151 7.040 - 7.070: 99.7741% ( 1) 00:14:47.151 7.131 - 7.162: 99.7802% ( 1) 00:14:47.151 7.192 - 7.223: 99.7863% ( 1) 00:14:47.151 7.314 - 7.345: 99.7924% ( 1) 00:14:47.151 7.558 - 7.589: 99.8046% ( 2) 00:14:47.151 7.619 - 7.650: 99.8107% ( 1) 00:14:47.151 7.802 - 7.863: 99.8168% ( 1) 00:14:47.151 8.107 - 8.168: 99.8230% ( 1) 00:14:47.151 8.350 - 8.411: 99.8291% ( 1) 00:14:47.151 8.960 - 9.021: 99.8413% ( 2) 00:14:47.151 9.021 - 9.082: 99.8535% ( 2) 00:14:47.151 [2024-12-09 17:25:13.687161] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:47.408 9.143 - 9.204: 99.8596% ( 1) 00:14:47.408 9.448 - 9.509: 99.8657% ( 1) 00:14:47.409 9.691 - 9.752: 99.8718% ( 1) 00:14:47.409 9.752 - 9.813: 99.8779% ( 1) 00:14:47.409 10.484 - 10.545: 99.8840% ( 1) 00:14:47.409 12.251 - 12.312: 99.8901% ( 1) 00:14:47.409 13.653 - 13.714: 99.8962% ( 1) 00:14:47.409 19.383 - 19.505: 99.9023% ( 1) 00:14:47.409 3994.575 - 4025.783: 100.0000% ( 16) 00:14:47.409 00:14:47.409 Complete histogram 00:14:47.409 ================== 00:14:47.409 Range in us Cumulative Count 00:14:47.409 1.714 - 1.722: 0.0061% ( 1) 00:14:47.409 1.730 - 1.737: 0.0122% ( 1) 00:14:47.409 1.737 - 1.745: 0.0183% ( 1) 00:14:47.409 1.752 - 1.760: 0.0366% ( 3) 00:14:47.409 1.760 - 1.768: 0.3114% ( 45) 00:14:47.409 1.768 - 1.775: 1.4408% ( 185) 00:14:47.409 1.775 - 1.783: 2.6984% ( 206) 00:14:47.409 1.783 - 1.790: 3.6081% ( 149) 00:14:47.409 1.790 - 1.798: 4.3040% ( 114) 00:14:47.409 1.798 - 1.806: 4.7558% ( 74) 00:14:47.409 1.806 - 1.813: 6.0928% ( 219) 00:14:47.409 1.813 - 1.821: 19.3223% ( 2167) 00:14:47.409 1.821 - 1.829: 53.4615% ( 5592) 00:14:47.409 1.829 - 1.836: 79.4628% ( 4259) 00:14:47.409 1.836 - 1.844: 89.0171% ( 1565) 00:14:47.409 1.844 - 1.851: 92.8083% ( 621) 00:14:47.409 1.851 - 1.859: 95.0000% ( 359) 00:14:47.409 1.859 - 1.867: 96.1538% ( 189) 00:14:47.409 1.867 - 1.874: 96.4713% ( 52) 00:14:47.409 1.874 - 1.882: 96.7338% ( 43) 
00:14:47.409 1.882 - 1.890: 97.1612% ( 70) 00:14:47.409 1.890 - 1.897: 97.6496% ( 80) 00:14:47.409 1.897 - 1.905: 98.2295% ( 95) 00:14:47.409 1.905 - 1.912: 98.7118% ( 79) 00:14:47.409 1.912 - 1.920: 99.0537% ( 56) 00:14:47.409 1.920 - 1.928: 99.2247% ( 28) 00:14:47.409 1.928 - 1.935: 99.2613% ( 6) 00:14:47.409 1.935 - 1.943: 99.3101% ( 8) 00:14:47.409 1.943 - 1.950: 99.3223% ( 2) 00:14:47.409 1.950 - 1.966: 99.3407% ( 3) 00:14:47.409 1.981 - 1.996: 99.3529% ( 2) 00:14:47.409 2.011 - 2.027: 99.3651% ( 2) 00:14:47.409 2.027 - 2.042: 99.3712% ( 1) 00:14:47.409 3.840 - 3.855: 99.3773% ( 1) 00:14:47.409 3.855 - 3.870: 99.3834% ( 1) 00:14:47.409 3.901 - 3.931: 99.3895% ( 1) 00:14:47.409 3.992 - 4.023: 99.3956% ( 1) 00:14:47.409 4.175 - 4.206: 99.4017% ( 1) 00:14:47.409 4.206 - 4.236: 99.4078% ( 1) 00:14:47.409 4.358 - 4.389: 99.4139% ( 1) 00:14:47.409 4.571 - 4.602: 99.4200% ( 1) 00:14:47.409 4.632 - 4.663: 99.4261% ( 1) 00:14:47.409 4.663 - 4.693: 99.4322% ( 1) 00:14:47.409 4.693 - 4.724: 99.4383% ( 1) 00:14:47.409 4.724 - 4.754: 99.4444% ( 1) 00:14:47.409 4.998 - 5.029: 99.4505% ( 1) 00:14:47.409 5.029 - 5.059: 99.4567% ( 1) 00:14:47.409 5.577 - 5.608: 99.4628% ( 1) 00:14:47.409 5.669 - 5.699: 99.4689% ( 1) 00:14:47.409 5.882 - 5.912: 99.4750% ( 1) 00:14:47.409 5.973 - 6.004: 99.4811% ( 1) 00:14:47.409 6.278 - 6.309: 99.4872% ( 1) 00:14:47.409 6.370 - 6.400: 99.4933% ( 1) 00:14:47.409 6.400 - 6.430: 99.4994% ( 1) 00:14:47.409 7.010 - 7.040: 99.5055% ( 1) 00:14:47.409 7.192 - 7.223: 99.5116% ( 1) 00:14:47.409 7.345 - 7.375: 99.5177% ( 1) 00:14:47.409 7.436 - 7.467: 99.5238% ( 1) 00:14:47.409 8.046 - 8.107: 99.5299% ( 1) 00:14:47.409 8.168 - 8.229: 99.5360% ( 1) 00:14:47.409 8.229 - 8.290: 99.5421% ( 1) 00:14:47.409 8.838 - 8.899: 99.5482% ( 1) 00:14:47.409 11.520 - 11.581: 99.5543% ( 1) 00:14:47.409 12.556 - 12.617: 99.5604% ( 1) 00:14:47.409 3994.575 - 4025.783: 100.0000% ( 72) 00:14:47.409 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:47.409 [ 00:14:47.409 { 00:14:47.409 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:47.409 "subtype": "Discovery", 00:14:47.409 "listen_addresses": [], 00:14:47.409 "allow_any_host": true, 00:14:47.409 "hosts": [] 00:14:47.409 }, 00:14:47.409 { 00:14:47.409 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:47.409 "subtype": "NVMe", 00:14:47.409 "listen_addresses": [ 00:14:47.409 { 00:14:47.409 "trtype": "VFIOUSER", 00:14:47.409 "adrfam": "IPv4", 00:14:47.409 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:47.409 "trsvcid": "0" 00:14:47.409 } 00:14:47.409 ], 00:14:47.409 "allow_any_host": true, 00:14:47.409 "hosts": [], 00:14:47.409 "serial_number": "SPDK1", 00:14:47.409 "model_number": "SPDK bdev Controller", 00:14:47.409 "max_namespaces": 32, 00:14:47.409 "min_cntlid": 1, 00:14:47.409 "max_cntlid": 65519, 00:14:47.409 "namespaces": [ 00:14:47.409 { 00:14:47.409 "nsid": 1, 00:14:47.409 "bdev_name": "Malloc1", 00:14:47.409 "name": "Malloc1", 00:14:47.409 "nguid": "09083B8B540F400C9C9E3A46EF6C2271", 00:14:47.409 "uuid": "09083b8b-540f-400c-9c9e-3a46ef6c2271" 00:14:47.409 }, 00:14:47.409 { 00:14:47.409 "nsid": 2, 00:14:47.409 "bdev_name": "Malloc3", 00:14:47.409 "name": "Malloc3", 00:14:47.409 "nguid": "A0C4B8A7BF304D22A99D1CDA7E0E2E03", 00:14:47.409 
"uuid": "a0c4b8a7-bf30-4d22-a99d-1cda7e0e2e03" 00:14:47.409 } 00:14:47.409 ] 00:14:47.409 }, 00:14:47.409 { 00:14:47.409 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:47.409 "subtype": "NVMe", 00:14:47.409 "listen_addresses": [ 00:14:47.409 { 00:14:47.409 "trtype": "VFIOUSER", 00:14:47.409 "adrfam": "IPv4", 00:14:47.409 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:47.409 "trsvcid": "0" 00:14:47.409 } 00:14:47.409 ], 00:14:47.409 "allow_any_host": true, 00:14:47.409 "hosts": [], 00:14:47.409 "serial_number": "SPDK2", 00:14:47.409 "model_number": "SPDK bdev Controller", 00:14:47.409 "max_namespaces": 32, 00:14:47.409 "min_cntlid": 1, 00:14:47.409 "max_cntlid": 65519, 00:14:47.409 "namespaces": [ 00:14:47.409 { 00:14:47.409 "nsid": 1, 00:14:47.409 "bdev_name": "Malloc2", 00:14:47.409 "name": "Malloc2", 00:14:47.409 "nguid": "7CDCD264659A4BA58C613EBA86B3AA3C", 00:14:47.409 "uuid": "7cdcd264-659a-4ba5-8c61-3eba86b3aa3c" 00:14:47.409 } 00:14:47.409 ] 00:14:47.409 } 00:14:47.409 ] 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1872412 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:47.409 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:47.667 [2024-12-09 17:25:14.082617] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:47.667 Malloc4 00:14:47.667 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:47.926 [2024-12-09 17:25:14.319378] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:47.926 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:47.926 Asynchronous Event Request test 00:14:47.926 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.926 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.926 Registering asynchronous event callbacks... 00:14:47.926 Starting namespace attribute notice tests for all controllers... 00:14:47.926 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:47.926 aer_cb - Changed Namespace 00:14:47.926 Cleaning up... 
00:14:48.186 [ 00:14:48.186 { 00:14:48.186 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:48.186 "subtype": "Discovery", 00:14:48.186 "listen_addresses": [], 00:14:48.186 "allow_any_host": true, 00:14:48.186 "hosts": [] 00:14:48.186 }, 00:14:48.186 { 00:14:48.186 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:48.186 "subtype": "NVMe", 00:14:48.186 "listen_addresses": [ 00:14:48.186 { 00:14:48.186 "trtype": "VFIOUSER", 00:14:48.186 "adrfam": "IPv4", 00:14:48.186 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:48.186 "trsvcid": "0" 00:14:48.186 } 00:14:48.186 ], 00:14:48.186 "allow_any_host": true, 00:14:48.186 "hosts": [], 00:14:48.186 "serial_number": "SPDK1", 00:14:48.186 "model_number": "SPDK bdev Controller", 00:14:48.186 "max_namespaces": 32, 00:14:48.186 "min_cntlid": 1, 00:14:48.186 "max_cntlid": 65519, 00:14:48.186 "namespaces": [ 00:14:48.186 { 00:14:48.186 "nsid": 1, 00:14:48.186 "bdev_name": "Malloc1", 00:14:48.186 "name": "Malloc1", 00:14:48.186 "nguid": "09083B8B540F400C9C9E3A46EF6C2271", 00:14:48.186 "uuid": "09083b8b-540f-400c-9c9e-3a46ef6c2271" 00:14:48.186 }, 00:14:48.186 { 00:14:48.186 "nsid": 2, 00:14:48.186 "bdev_name": "Malloc3", 00:14:48.186 "name": "Malloc3", 00:14:48.186 "nguid": "A0C4B8A7BF304D22A99D1CDA7E0E2E03", 00:14:48.186 "uuid": "a0c4b8a7-bf30-4d22-a99d-1cda7e0e2e03" 00:14:48.186 } 00:14:48.186 ] 00:14:48.186 }, 00:14:48.186 { 00:14:48.186 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:48.186 "subtype": "NVMe", 00:14:48.186 "listen_addresses": [ 00:14:48.186 { 00:14:48.186 "trtype": "VFIOUSER", 00:14:48.186 "adrfam": "IPv4", 00:14:48.186 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:48.186 "trsvcid": "0" 00:14:48.186 } 00:14:48.186 ], 00:14:48.186 "allow_any_host": true, 00:14:48.186 "hosts": [], 00:14:48.186 "serial_number": "SPDK2", 00:14:48.186 "model_number": "SPDK bdev Controller", 00:14:48.186 "max_namespaces": 32, 00:14:48.186 "min_cntlid": 1, 00:14:48.186 "max_cntlid": 65519, 00:14:48.186 "namespaces": [ 
00:14:48.186 { 00:14:48.186 "nsid": 1, 00:14:48.186 "bdev_name": "Malloc2", 00:14:48.186 "name": "Malloc2", 00:14:48.186 "nguid": "7CDCD264659A4BA58C613EBA86B3AA3C", 00:14:48.186 "uuid": "7cdcd264-659a-4ba5-8c61-3eba86b3aa3c" 00:14:48.186 }, 00:14:48.186 { 00:14:48.186 "nsid": 2, 00:14:48.186 "bdev_name": "Malloc4", 00:14:48.186 "name": "Malloc4", 00:14:48.186 "nguid": "ED340C1CE10245939DCC3E6A63BFF7D4", 00:14:48.186 "uuid": "ed340c1c-e102-4593-9dcc-3e6a63bff7d4" 00:14:48.186 } 00:14:48.186 ] 00:14:48.186 } 00:14:48.186 ] 00:14:48.186 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1872412 00:14:48.186 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:48.186 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1864789 00:14:48.186 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1864789 ']' 00:14:48.186 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1864789 00:14:48.186 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:48.186 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.186 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1864789 00:14:48.186 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.186 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.186 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1864789' 00:14:48.186 killing process with pid 1864789 00:14:48.186 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 1864789 00:14:48.186 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1864789 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1872644 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1872644' 00:14:48.446 Process pid: 1872644 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1872644 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1872644 ']' 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.446 
17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.446 17:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:48.446 [2024-12-09 17:25:14.893333] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:48.446 [2024-12-09 17:25:14.894175] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:14:48.446 [2024-12-09 17:25:14.894213] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.446 [2024-12-09 17:25:14.965846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.706 [2024-12-09 17:25:15.003327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.706 [2024-12-09 17:25:15.003361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.706 [2024-12-09 17:25:15.003367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.707 [2024-12-09 17:25:15.003373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.707 [2024-12-09 17:25:15.003377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:48.707 [2024-12-09 17:25:15.004785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.707 [2024-12-09 17:25:15.004893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.707 [2024-12-09 17:25:15.004998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.707 [2024-12-09 17:25:15.005000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.707 [2024-12-09 17:25:15.072672] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:48.707 [2024-12-09 17:25:15.073597] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:48.707 [2024-12-09 17:25:15.073651] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:48.707 [2024-12-09 17:25:15.073828] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:48.707 [2024-12-09 17:25:15.073889] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:48.707 17:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.707 17:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:48.707 17:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:49.645 17:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:49.904 17:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:49.904 17:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:49.904 17:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.904 17:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:49.904 17:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:50.163 Malloc1 00:14:50.163 17:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:50.421 17:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:50.421 17:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:50.680 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:50.680 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:50.680 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:50.939 Malloc2 00:14:50.939 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:51.198 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:51.458 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:51.458 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:51.458 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1872644 00:14:51.458 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1872644 ']' 00:14:51.458 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1872644 00:14:51.458 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:51.458 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.458 17:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1872644 00:14:51.458 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:51.458 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:51.458 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1872644' 00:14:51.458 killing process with pid 1872644 00:14:51.458 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1872644 00:14:51.458 17:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1872644 00:14:51.718 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:51.718 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:51.718 00:14:51.718 real 0m50.911s 00:14:51.718 user 3m16.818s 00:14:51.718 sys 0m3.264s 00:14:51.718 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.718 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:51.718 ************************************ 00:14:51.718 END TEST nvmf_vfio_user 00:14:51.718 ************************************ 00:14:51.718 17:25:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:51.718 17:25:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:51.718 17:25:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.718 17:25:18 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.979 ************************************ 00:14:51.979 START TEST nvmf_vfio_user_nvme_compliance 00:14:51.979 ************************************ 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:51.979 * Looking for test storage... 00:14:51.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:51.979 17:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:51.979 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.980 17:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:51.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.980 --rc genhtml_branch_coverage=1 00:14:51.980 --rc genhtml_function_coverage=1 00:14:51.980 --rc genhtml_legend=1 00:14:51.980 --rc geninfo_all_blocks=1 00:14:51.980 --rc geninfo_unexecuted_blocks=1 00:14:51.980 00:14:51.980 ' 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:51.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.980 --rc genhtml_branch_coverage=1 00:14:51.980 --rc genhtml_function_coverage=1 00:14:51.980 --rc genhtml_legend=1 00:14:51.980 --rc geninfo_all_blocks=1 00:14:51.980 --rc geninfo_unexecuted_blocks=1 00:14:51.980 00:14:51.980 ' 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:51.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.980 --rc genhtml_branch_coverage=1 00:14:51.980 --rc genhtml_function_coverage=1 00:14:51.980 --rc 
genhtml_legend=1 00:14:51.980 --rc geninfo_all_blocks=1 00:14:51.980 --rc geninfo_unexecuted_blocks=1 00:14:51.980 00:14:51.980 ' 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:51.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.980 --rc genhtml_branch_coverage=1 00:14:51.980 --rc genhtml_function_coverage=1 00:14:51.980 --rc genhtml_legend=1 00:14:51.980 --rc geninfo_all_blocks=1 00:14:51.980 --rc geninfo_unexecuted_blocks=1 00:14:51.980 00:14:51.980 ' 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.980 17:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:51.980 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:51.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:51.981 17:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1873263 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1873263' 00:14:51.981 Process pid: 1873263 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1873263 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1873263 ']' 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.981 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:52.240 [2024-12-09 17:25:18.522698] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:14:52.240 [2024-12-09 17:25:18.522748] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.240 [2024-12-09 17:25:18.595326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:52.240 [2024-12-09 17:25:18.635425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.240 [2024-12-09 17:25:18.635461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.240 [2024-12-09 17:25:18.635468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.240 [2024-12-09 17:25:18.635474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.240 [2024-12-09 17:25:18.635479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:52.240 [2024-12-09 17:25:18.636774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.240 [2024-12-09 17:25:18.636880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.240 [2024-12-09 17:25:18.636882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.240 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.240 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:52.240 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.621 17:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:53.621 malloc0 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:53.621 17:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:53.621 00:14:53.621 00:14:53.621 CUnit - A unit testing framework for C - Version 2.1-3 00:14:53.621 http://cunit.sourceforge.net/ 00:14:53.621 00:14:53.621 00:14:53.621 Suite: nvme_compliance 00:14:53.621 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 17:25:19.963584] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.621 [2024-12-09 17:25:19.964933] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:53.621 [2024-12-09 17:25:19.964948] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:53.622 [2024-12-09 17:25:19.964954] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:53.622 [2024-12-09 17:25:19.966613] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.622 passed 00:14:53.622 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 17:25:20.043175] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.622 [2024-12-09 17:25:20.046194] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.622 passed 00:14:53.622 Test: admin_identify_ns ...[2024-12-09 17:25:20.125720] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.881 [2024-12-09 17:25:20.185176] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:53.881 [2024-12-09 17:25:20.193181] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:53.881 [2024-12-09 17:25:20.214276] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:53.881 passed 00:14:53.881 Test: admin_get_features_mandatory_features ...[2024-12-09 17:25:20.293964] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.881 [2024-12-09 17:25:20.296985] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.881 passed 00:14:53.881 Test: admin_get_features_optional_features ...[2024-12-09 17:25:20.374521] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.881 [2024-12-09 17:25:20.377539] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.881 passed 00:14:54.140 Test: admin_set_features_number_of_queues ...[2024-12-09 17:25:20.452491] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.140 [2024-12-09 17:25:20.561350] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.140 passed 00:14:54.140 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 17:25:20.635107] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.140 [2024-12-09 17:25:20.638134] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.140 passed 00:14:54.399 Test: admin_get_log_page_with_lpo ...[2024-12-09 17:25:20.715434] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.399 [2024-12-09 17:25:20.784180] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:54.399 [2024-12-09 17:25:20.797230] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.399 passed 00:14:54.399 Test: fabric_property_get ...[2024-12-09 17:25:20.869964] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.399 [2024-12-09 17:25:20.871196] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:54.399 [2024-12-09 17:25:20.874995] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.399 passed 00:14:54.657 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 17:25:20.953521] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.657 [2024-12-09 17:25:20.954753] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:54.657 [2024-12-09 17:25:20.956546] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.657 passed 00:14:54.657 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 17:25:21.030252] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.657 [2024-12-09 17:25:21.118182] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:54.657 [2024-12-09 17:25:21.134182] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:54.657 [2024-12-09 17:25:21.139262] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.657 passed 00:14:54.916 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 17:25:21.213037] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.916 [2024-12-09 17:25:21.214300] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:54.916 [2024-12-09 17:25:21.216054] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.916 passed 00:14:54.916 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 17:25:21.292822] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.916 [2024-12-09 17:25:21.369180] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:54.916 [2024-12-09 
17:25:21.393178] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:54.916 [2024-12-09 17:25:21.398262] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.916 passed 00:14:55.175 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 17:25:21.475046] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.175 [2024-12-09 17:25:21.476279] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:55.175 [2024-12-09 17:25:21.476309] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:55.175 [2024-12-09 17:25:21.478076] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.175 passed 00:14:55.175 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 17:25:21.553752] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.175 [2024-12-09 17:25:21.645178] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:55.176 [2024-12-09 17:25:21.653182] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:55.176 [2024-12-09 17:25:21.661173] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:55.176 [2024-12-09 17:25:21.669184] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:55.176 [2024-12-09 17:25:21.698258] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.434 passed 00:14:55.434 Test: admin_create_io_sq_verify_pc ...[2024-12-09 17:25:21.773914] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.434 [2024-12-09 17:25:21.790180] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:55.434 [2024-12-09 17:25:21.808089] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.434 passed 00:14:55.434 Test: admin_create_io_qp_max_qps ...[2024-12-09 17:25:21.885643] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:56.812 [2024-12-09 17:25:22.991176] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:57.071 [2024-12-09 17:25:23.374528] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.071 passed 00:14:57.071 Test: admin_create_io_sq_shared_cq ...[2024-12-09 17:25:23.451449] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.071 [2024-12-09 17:25:23.591173] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:57.330 [2024-12-09 17:25:23.628242] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.330 passed 00:14:57.330 00:14:57.330 Run Summary: Type Total Ran Passed Failed Inactive 00:14:57.330 suites 1 1 n/a 0 0 00:14:57.330 tests 18 18 18 0 0 00:14:57.330 asserts 360 360 360 0 n/a 00:14:57.330 00:14:57.330 Elapsed time = 1.507 seconds 00:14:57.330 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1873263 00:14:57.330 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1873263 ']' 00:14:57.330 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1873263 00:14:57.330 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:57.330 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.330 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1873263 00:14:57.330 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.330 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.330 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1873263' 00:14:57.330 killing process with pid 1873263 00:14:57.330 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1873263 00:14:57.330 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1873263 00:14:57.590 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:57.590 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:57.590 00:14:57.590 real 0m5.638s 00:14:57.590 user 0m15.815s 00:14:57.590 sys 0m0.509s 00:14:57.590 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:57.590 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:57.590 ************************************ 00:14:57.590 END TEST nvmf_vfio_user_nvme_compliance 00:14:57.590 ************************************ 00:14:57.590 17:25:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:57.590 17:25:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:57.590 17:25:23 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:57.590 17:25:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:57.590 ************************************ 00:14:57.590 START TEST nvmf_vfio_user_fuzz 00:14:57.590 ************************************ 00:14:57.590 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:57.590 * Looking for test storage... 00:14:57.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:57.590 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:57.590 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:14:57.590 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:57.590 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:57.590 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:57.590 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:57.590 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:57.590 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:57.590 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:57.590 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:57.590 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:57.850 17:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:57.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.850 --rc genhtml_branch_coverage=1 00:14:57.850 --rc genhtml_function_coverage=1 00:14:57.850 --rc genhtml_legend=1 00:14:57.850 --rc geninfo_all_blocks=1 00:14:57.850 --rc geninfo_unexecuted_blocks=1 00:14:57.850 00:14:57.850 ' 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:57.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.850 --rc genhtml_branch_coverage=1 00:14:57.850 --rc genhtml_function_coverage=1 00:14:57.850 --rc genhtml_legend=1 00:14:57.850 --rc geninfo_all_blocks=1 00:14:57.850 --rc geninfo_unexecuted_blocks=1 00:14:57.850 00:14:57.850 ' 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:57.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.850 --rc genhtml_branch_coverage=1 00:14:57.850 --rc genhtml_function_coverage=1 00:14:57.850 --rc genhtml_legend=1 00:14:57.850 --rc geninfo_all_blocks=1 00:14:57.850 --rc geninfo_unexecuted_blocks=1 00:14:57.850 00:14:57.850 ' 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:57.850 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:57.850 --rc genhtml_branch_coverage=1 00:14:57.850 --rc genhtml_function_coverage=1 00:14:57.850 --rc genhtml_legend=1 00:14:57.850 --rc geninfo_all_blocks=1 00:14:57.850 --rc geninfo_unexecuted_blocks=1 00:14:57.850 00:14:57.850 ' 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.850 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.851 17:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:57.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1874346 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1874346' 00:14:57.851 Process pid: 1874346 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1874346 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1874346 ']' 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.851 17:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.851 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:58.110 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.110 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:58.110 17:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:59.048 malloc0 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:59.048 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:31.130 Fuzzing completed. Shutting down the fuzz application 00:15:31.130 00:15:31.130 Dumping successful admin opcodes: 00:15:31.130 9, 10, 00:15:31.130 Dumping successful io opcodes: 00:15:31.130 0, 00:15:31.130 NS: 0x20000081ef00 I/O qp, Total commands completed: 992611, total successful commands: 3887, random_seed: 2136198400 00:15:31.130 NS: 0x20000081ef00 admin qp, Total commands completed: 240832, total successful commands: 56, random_seed: 4256380736 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1874346 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1874346 ']' 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1874346 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1874346 00:15:31.130 17:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1874346' 00:15:31.130 killing process with pid 1874346 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1874346 00:15:31.130 17:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1874346 00:15:31.130 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:31.130 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:31.130 00:15:31.130 real 0m32.206s 00:15:31.130 user 0m29.248s 00:15:31.130 sys 0m31.599s 00:15:31.130 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.130 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:31.130 ************************************ 00:15:31.130 END TEST nvmf_vfio_user_fuzz 00:15:31.130 ************************************ 00:15:31.130 17:25:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:31.130 17:25:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:31.130 17:25:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:15:31.130 17:25:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.130 ************************************ 00:15:31.130 START TEST nvmf_auth_target 00:15:31.130 ************************************ 00:15:31.130 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:31.130 * Looking for test storage... 00:15:31.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.130 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:31.131 17:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.131 17:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:31.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.131 --rc genhtml_branch_coverage=1 00:15:31.131 --rc genhtml_function_coverage=1 00:15:31.131 --rc genhtml_legend=1 00:15:31.131 --rc geninfo_all_blocks=1 00:15:31.131 --rc geninfo_unexecuted_blocks=1 00:15:31.131 00:15:31.131 ' 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:31.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.131 --rc genhtml_branch_coverage=1 00:15:31.131 --rc genhtml_function_coverage=1 00:15:31.131 --rc genhtml_legend=1 00:15:31.131 --rc geninfo_all_blocks=1 00:15:31.131 --rc geninfo_unexecuted_blocks=1 00:15:31.131 00:15:31.131 ' 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:31.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.131 --rc genhtml_branch_coverage=1 00:15:31.131 --rc genhtml_function_coverage=1 00:15:31.131 --rc genhtml_legend=1 00:15:31.131 --rc geninfo_all_blocks=1 00:15:31.131 --rc geninfo_unexecuted_blocks=1 00:15:31.131 00:15:31.131 ' 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:31.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.131 --rc genhtml_branch_coverage=1 00:15:31.131 --rc genhtml_function_coverage=1 00:15:31.131 --rc genhtml_legend=1 00:15:31.131 
--rc geninfo_all_blocks=1 00:15:31.131 --rc geninfo_unexecuted_blocks=1 00:15:31.131 00:15:31.131 ' 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.131 
17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:31.131 17:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:31.131 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:31.132 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:31.132 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:31.132 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:31.132 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.132 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:31.132 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:31.132 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:31.132 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.132 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.132 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.132 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:31.132 17:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:31.132 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:31.132 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:36.408 17:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:36.408 17:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:36.408 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:36.408 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.408 
17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.408 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:36.409 Found net devices under 0000:af:00.0: cvl_0_0 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.409 
17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:36.409 Found net devices under 0000:af:00.1: cvl_0_1 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:36.409 17:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:36.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:15:36.409 00:15:36.409 --- 10.0.0.2 ping statistics --- 00:15:36.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.409 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:15:36.409 00:15:36.409 --- 10.0.0.1 ping statistics --- 00:15:36.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.409 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1882597 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1882597 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1882597 ']' 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1882693 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a8e82526cca401c077a2871efaca280ef9b8e30a37461b01 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.GIT 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a8e82526cca401c077a2871efaca280ef9b8e30a37461b01 0 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a8e82526cca401c077a2871efaca280ef9b8e30a37461b01 0 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a8e82526cca401c077a2871efaca280ef9b8e30a37461b01 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.GIT 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.GIT 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.GIT 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:36.409 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0333f503f33177c37cbab1d04c8c723439d730f8b518a3bbc53fb748647e8cd3 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.JKB 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0333f503f33177c37cbab1d04c8c723439d730f8b518a3bbc53fb748647e8cd3 3 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0333f503f33177c37cbab1d04c8c723439d730f8b518a3bbc53fb748647e8cd3 3 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0333f503f33177c37cbab1d04c8c723439d730f8b518a3bbc53fb748647e8cd3 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.JKB 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.JKB 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.JKB 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ad7bdf6229236a926878752c489fb986 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.hfT 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ad7bdf6229236a926878752c489fb986 1 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
ad7bdf6229236a926878752c489fb986 1 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ad7bdf6229236a926878752c489fb986 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:36.410 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.hfT 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.hfT 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.hfT 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=69f3c105b6ec106b7e860d6f145473aeedb49de244b0d742 00:15:36.669 17:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1gI 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 69f3c105b6ec106b7e860d6f145473aeedb49de244b0d742 2 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 69f3c105b6ec106b7e860d6f145473aeedb49de244b0d742 2 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=69f3c105b6ec106b7e860d6f145473aeedb49de244b0d742 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:36.669 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1gI 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1gI 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.1gI 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=90272623f54bef7d0c6c2c2e56378e83e8894e6feb92f782 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Zwa 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 90272623f54bef7d0c6c2c2e56378e83e8894e6feb92f782 2 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 90272623f54bef7d0c6c2c2e56378e83e8894e6feb92f782 2 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=90272623f54bef7d0c6c2c2e56378e83e8894e6feb92f782 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Zwa 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Zwa 00:15:36.669 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.Zwa 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=64802621501aa2e59dba5206f62706ac 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.J7w 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 64802621501aa2e59dba5206f62706ac 1 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 64802621501aa2e59dba5206f62706ac 1 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=64802621501aa2e59dba5206f62706ac 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.J7w 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.J7w 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.J7w 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9f9d930d203c4d1ad401c8cc72633c2e6f666a4f67801854b0ad30c54eb3d8a1 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.W6A 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9f9d930d203c4d1ad401c8cc72633c2e6f666a4f67801854b0ad30c54eb3d8a1 3 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 9f9d930d203c4d1ad401c8cc72633c2e6f666a4f67801854b0ad30c54eb3d8a1 3 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9f9d930d203c4d1ad401c8cc72633c2e6f666a4f67801854b0ad30c54eb3d8a1 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:36.670 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.W6A 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.W6A 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.W6A 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1882597 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1882597 ']' 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1882693 /var/tmp/host.sock 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1882693 ']' 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:36.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.929 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.188 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.188 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:37.188 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:37.188 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.188 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.188 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.188 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:37.188 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GIT 00:15:37.188 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.188 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.188 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.188 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.GIT 00:15:37.188 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.GIT 00:15:37.447 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.JKB ]] 00:15:37.447 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JKB 00:15:37.447 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.447 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.447 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.447 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JKB 00:15:37.447 17:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JKB 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.hfT 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.hfT 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.hfT 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.1gI ]] 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1gI 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1gI 00:15:37.706 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1gI 00:15:37.965 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:37.965 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Zwa 00:15:37.965 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.965 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.965 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.965 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Zwa 00:15:37.965 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Zwa 00:15:38.224 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.J7w ]] 00:15:38.224 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J7w 00:15:38.224 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.224 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.224 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.224 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J7w 00:15:38.224 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J7w 00:15:38.483 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:38.483 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.W6A 00:15:38.483 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.483 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.483 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.483 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.W6A 00:15:38.483 17:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.W6A 00:15:38.745 17:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:38.745 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:38.745 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.745 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.745 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.745 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.745 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:38.745 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.745 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.745 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:38.745 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:38.745 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.746 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.746 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.746 17:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.746 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.746 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.746 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.746 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.052 00:15:39.053 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.053 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.053 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.342 { 00:15:39.342 "cntlid": 1, 00:15:39.342 "qid": 0, 00:15:39.342 "state": "enabled", 00:15:39.342 "thread": "nvmf_tgt_poll_group_000", 00:15:39.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:39.342 "listen_address": { 00:15:39.342 "trtype": "TCP", 00:15:39.342 "adrfam": "IPv4", 00:15:39.342 "traddr": "10.0.0.2", 00:15:39.342 "trsvcid": "4420" 00:15:39.342 }, 00:15:39.342 "peer_address": { 00:15:39.342 "trtype": "TCP", 00:15:39.342 "adrfam": "IPv4", 00:15:39.342 "traddr": "10.0.0.1", 00:15:39.342 "trsvcid": "44146" 00:15:39.342 }, 00:15:39.342 "auth": { 00:15:39.342 "state": "completed", 00:15:39.342 "digest": "sha256", 00:15:39.342 "dhgroup": "null" 00:15:39.342 } 00:15:39.342 } 00:15:39.342 ]' 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.342 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.601 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:15:39.601 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:15:40.169 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.169 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:40.169 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.169 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.169 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.169 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.169 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:40.169 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:40.428 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:40.428 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.428 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.428 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:40.428 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:40.428 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.428 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.428 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.428 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.428 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.428 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.428 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.428 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.687 00:15:40.687 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.687 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.687 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.946 { 00:15:40.946 "cntlid": 3, 00:15:40.946 "qid": 0, 00:15:40.946 "state": "enabled", 00:15:40.946 "thread": "nvmf_tgt_poll_group_000", 00:15:40.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:40.946 "listen_address": { 00:15:40.946 "trtype": "TCP", 00:15:40.946 "adrfam": "IPv4", 00:15:40.946 
"traddr": "10.0.0.2", 00:15:40.946 "trsvcid": "4420" 00:15:40.946 }, 00:15:40.946 "peer_address": { 00:15:40.946 "trtype": "TCP", 00:15:40.946 "adrfam": "IPv4", 00:15:40.946 "traddr": "10.0.0.1", 00:15:40.946 "trsvcid": "48698" 00:15:40.946 }, 00:15:40.946 "auth": { 00:15:40.946 "state": "completed", 00:15:40.946 "digest": "sha256", 00:15:40.946 "dhgroup": "null" 00:15:40.946 } 00:15:40.946 } 00:15:40.946 ]' 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.946 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.205 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:15:41.205 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:15:41.771 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.771 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:41.771 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.771 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.771 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.771 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.771 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:41.771 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:42.030 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:42.030 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.030 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.030 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:42.030 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:42.030 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.030 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.030 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.030 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.030 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.030 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.030 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.030 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.288 00:15:42.288 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.288 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.288 
17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.288 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.288 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.288 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.288 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.288 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.288 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.288 { 00:15:42.288 "cntlid": 5, 00:15:42.288 "qid": 0, 00:15:42.288 "state": "enabled", 00:15:42.288 "thread": "nvmf_tgt_poll_group_000", 00:15:42.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:42.288 "listen_address": { 00:15:42.288 "trtype": "TCP", 00:15:42.288 "adrfam": "IPv4", 00:15:42.288 "traddr": "10.0.0.2", 00:15:42.288 "trsvcid": "4420" 00:15:42.288 }, 00:15:42.288 "peer_address": { 00:15:42.288 "trtype": "TCP", 00:15:42.288 "adrfam": "IPv4", 00:15:42.288 "traddr": "10.0.0.1", 00:15:42.288 "trsvcid": "48744" 00:15:42.288 }, 00:15:42.288 "auth": { 00:15:42.288 "state": "completed", 00:15:42.288 "digest": "sha256", 00:15:42.288 "dhgroup": "null" 00:15:42.288 } 00:15:42.288 } 00:15:42.288 ]' 00:15:42.547 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.547 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.547 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:42.547 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:42.547 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.547 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.547 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.547 17:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.805 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:15:42.805 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:15:43.374 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.374 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:43.374 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.374 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.374 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.374 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.374 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:43.374 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:43.632 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:43.632 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.632 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.632 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:43.632 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:43.632 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.632 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:43.632 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.632 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:43.632 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.632 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:43.632 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.632 17:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.891 00:15:43.891 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.891 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.891 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.150 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.150 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.150 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.150 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.150 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.150 
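Note that in this iteration `key3` is added and attached without any `--dhchap-ctrlr-key` argument, unlike the earlier `key2` run: the bash expansion `${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}` emits the controller-key pair only when a controller key exists for that key index. A hedged sketch of that conditional argument construction — the function name and parameters are illustrative, not SPDK's:

```python
def add_host_args(subnqn, hostnqn, keyid, have_ckey):
    """Build the nvmf_subsystem_add_host argument list; the
    --dhchap-ctrlr-key pair is appended only when a controller key
    exists for this key index, mirroring the bash
    ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion."""
    args = ["nvmf_subsystem_add_host", subnqn, hostnqn,
            "--dhchap-key", f"key{keyid}"]
    if have_ckey:
        args += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
    return args

# key2 carries a controller key in the log; key3 does not.
print(add_host_args("nqn.2024-03.io.spdk:cnode0", "hostnqn", 2, True))
print(add_host_args("nqn.2024-03.io.spdk:cnode0", "hostnqn", 3, False))
```

The same pattern governs the subsequent `bdev_nvme_attach_controller` calls, which is why the `key3` attach line in the trace likewise lacks a `--dhchap-ctrlr-key` flag.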
17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.150 { 00:15:44.150 "cntlid": 7, 00:15:44.150 "qid": 0, 00:15:44.150 "state": "enabled", 00:15:44.150 "thread": "nvmf_tgt_poll_group_000", 00:15:44.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:44.150 "listen_address": { 00:15:44.150 "trtype": "TCP", 00:15:44.150 "adrfam": "IPv4", 00:15:44.150 "traddr": "10.0.0.2", 00:15:44.150 "trsvcid": "4420" 00:15:44.150 }, 00:15:44.150 "peer_address": { 00:15:44.150 "trtype": "TCP", 00:15:44.150 "adrfam": "IPv4", 00:15:44.150 "traddr": "10.0.0.1", 00:15:44.150 "trsvcid": "48764" 00:15:44.150 }, 00:15:44.150 "auth": { 00:15:44.150 "state": "completed", 00:15:44.150 "digest": "sha256", 00:15:44.150 "dhgroup": "null" 00:15:44.150 } 00:15:44.150 } 00:15:44.150 ]' 00:15:44.150 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.150 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.150 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.150 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:44.150 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.150 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.150 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.150 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.409 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:15:44.409 17:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:15:44.974 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.975 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:44.975 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.975 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.975 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.975 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.975 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.975 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:44.975 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:45.239 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:45.239 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.239 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.239 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:45.239 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:45.239 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.239 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.239 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.239 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.239 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.239 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.239 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.239 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.498 00:15:45.498 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.498 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.498 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.498 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.498 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.498 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.498 17:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.498 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.498 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.498 { 00:15:45.498 "cntlid": 9, 00:15:45.498 "qid": 0, 00:15:45.498 "state": "enabled", 00:15:45.498 "thread": "nvmf_tgt_poll_group_000", 00:15:45.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:45.498 "listen_address": { 00:15:45.498 "trtype": "TCP", 00:15:45.498 "adrfam": "IPv4", 00:15:45.498 "traddr": "10.0.0.2", 00:15:45.498 "trsvcid": "4420" 00:15:45.498 }, 00:15:45.498 "peer_address": { 00:15:45.498 "trtype": "TCP", 00:15:45.498 "adrfam": "IPv4", 00:15:45.498 "traddr": "10.0.0.1", 00:15:45.498 "trsvcid": "48798" 00:15:45.498 
}, 00:15:45.498 "auth": { 00:15:45.498 "state": "completed", 00:15:45.498 "digest": "sha256", 00:15:45.498 "dhgroup": "ffdhe2048" 00:15:45.498 } 00:15:45.498 } 00:15:45.498 ]' 00:15:45.498 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.757 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.757 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.757 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.757 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.757 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.757 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.757 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.757 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:15:45.757 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret 
DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:15:46.325 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.325 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:46.325 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.325 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.584 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.584 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.584 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:46.584 17:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:46.584 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:46.584 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.584 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.584 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:46.584 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:46.584 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.584 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.584 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.584 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.584 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.584 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.584 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.584 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.843 00:15:46.843 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.843 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.843 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.102 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.102 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.102 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.102 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.102 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.102 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.102 { 00:15:47.102 "cntlid": 11, 00:15:47.102 "qid": 0, 00:15:47.102 "state": "enabled", 00:15:47.102 "thread": "nvmf_tgt_poll_group_000", 00:15:47.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:47.102 "listen_address": { 00:15:47.102 "trtype": "TCP", 00:15:47.102 "adrfam": "IPv4", 00:15:47.102 "traddr": "10.0.0.2", 00:15:47.102 "trsvcid": "4420" 00:15:47.102 }, 00:15:47.102 "peer_address": { 00:15:47.102 "trtype": "TCP", 00:15:47.102 "adrfam": "IPv4", 00:15:47.102 "traddr": "10.0.0.1", 00:15:47.102 "trsvcid": "48820" 00:15:47.102 }, 00:15:47.102 "auth": { 00:15:47.102 "state": "completed", 00:15:47.102 "digest": "sha256", 00:15:47.102 "dhgroup": "ffdhe2048" 00:15:47.102 } 00:15:47.102 } 00:15:47.102 ]' 00:15:47.102 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.102 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.102 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.102 17:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.102 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.361 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.361 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.361 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.361 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:15:47.361 17:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:15:47.926 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.926 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:47.926 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:47.926 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.926 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.926 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.926 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:47.926 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:48.184 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:48.184 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.184 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.184 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:48.184 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:48.184 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.184 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.184 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.184 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:48.184 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.184 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.184 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.184 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.442 00:15:48.442 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.442 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.442 17:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.705 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.705 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.705 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.705 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.706 17:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.706 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.706 { 00:15:48.706 "cntlid": 13, 00:15:48.706 "qid": 0, 00:15:48.706 "state": "enabled", 00:15:48.706 "thread": "nvmf_tgt_poll_group_000", 00:15:48.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:48.706 "listen_address": { 00:15:48.706 "trtype": "TCP", 00:15:48.706 "adrfam": "IPv4", 00:15:48.706 "traddr": "10.0.0.2", 00:15:48.706 "trsvcid": "4420" 00:15:48.706 }, 00:15:48.706 "peer_address": { 00:15:48.706 "trtype": "TCP", 00:15:48.706 "adrfam": "IPv4", 00:15:48.706 "traddr": "10.0.0.1", 00:15:48.706 "trsvcid": "48844" 00:15:48.706 }, 00:15:48.706 "auth": { 00:15:48.706 "state": "completed", 00:15:48.706 "digest": "sha256", 00:15:48.706 "dhgroup": "ffdhe2048" 00:15:48.706 } 00:15:48.706 } 00:15:48.706 ]' 00:15:48.706 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.706 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.706 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.706 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:48.706 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.706 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.706 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.706 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.965 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:15:48.965 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:15:49.533 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.533 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:49.533 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.533 17:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.533 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.533 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.533 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:49.533 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:49.792 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:49.792 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.792 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:49.792 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:49.792 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:49.792 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.792 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:49.792 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.792 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.792 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.792 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:49.792 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.792 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.051 00:15:50.051 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.051 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.051 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.310 { 00:15:50.310 "cntlid": 15, 00:15:50.310 "qid": 0, 00:15:50.310 "state": "enabled", 00:15:50.310 "thread": "nvmf_tgt_poll_group_000", 00:15:50.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:50.310 "listen_address": { 00:15:50.310 "trtype": "TCP", 00:15:50.310 "adrfam": "IPv4", 00:15:50.310 "traddr": "10.0.0.2", 00:15:50.310 "trsvcid": "4420" 00:15:50.310 }, 00:15:50.310 "peer_address": { 00:15:50.310 "trtype": "TCP", 00:15:50.310 "adrfam": "IPv4", 00:15:50.310 "traddr": "10.0.0.1", 
00:15:50.310 "trsvcid": "48868" 00:15:50.310 }, 00:15:50.310 "auth": { 00:15:50.310 "state": "completed", 00:15:50.310 "digest": "sha256", 00:15:50.310 "dhgroup": "ffdhe2048" 00:15:50.310 } 00:15:50.310 } 00:15:50.310 ]' 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.310 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.569 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:15:50.569 17:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:15:51.135 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.135 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:51.136 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.136 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.136 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.136 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.136 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.136 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.136 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.394 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:51.394 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.394 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.394 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:51.395 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:51.395 17:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.395 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.395 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.395 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.395 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.395 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.395 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.395 17:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.654 00:15:51.654 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.654 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.654 17:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.913 { 00:15:51.913 "cntlid": 17, 00:15:51.913 "qid": 0, 00:15:51.913 "state": "enabled", 00:15:51.913 "thread": "nvmf_tgt_poll_group_000", 00:15:51.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:51.913 "listen_address": { 00:15:51.913 "trtype": "TCP", 00:15:51.913 "adrfam": "IPv4", 00:15:51.913 "traddr": "10.0.0.2", 00:15:51.913 "trsvcid": "4420" 00:15:51.913 }, 00:15:51.913 "peer_address": { 00:15:51.913 "trtype": "TCP", 00:15:51.913 "adrfam": "IPv4", 00:15:51.913 "traddr": "10.0.0.1", 00:15:51.913 "trsvcid": "60362" 00:15:51.913 }, 00:15:51.913 "auth": { 00:15:51.913 "state": "completed", 00:15:51.913 "digest": "sha256", 00:15:51.913 "dhgroup": "ffdhe3072" 00:15:51.913 } 00:15:51.913 } 00:15:51.913 ]' 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.913 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.172 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:15:52.172 17:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:15:52.739 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.739 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:52.739 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.739 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.739 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.739 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.739 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.739 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.998 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:52.998 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.998 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.998 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:52.998 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:52.998 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.998 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.998 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.998 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:52.998 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.998 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.998 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.998 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.257 00:15:53.257 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.257 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.257 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.257 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.257 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.257 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.515 
17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.515 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.515 { 00:15:53.515 "cntlid": 19, 00:15:53.515 "qid": 0, 00:15:53.515 "state": "enabled", 00:15:53.515 "thread": "nvmf_tgt_poll_group_000", 00:15:53.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:53.515 "listen_address": { 00:15:53.515 "trtype": "TCP", 00:15:53.515 "adrfam": "IPv4", 00:15:53.515 "traddr": "10.0.0.2", 00:15:53.515 "trsvcid": "4420" 00:15:53.515 }, 00:15:53.515 "peer_address": { 00:15:53.515 "trtype": "TCP", 00:15:53.515 "adrfam": "IPv4", 00:15:53.515 "traddr": "10.0.0.1", 00:15:53.515 "trsvcid": "60392" 00:15:53.515 }, 00:15:53.515 "auth": { 00:15:53.515 "state": "completed", 00:15:53.515 "digest": "sha256", 00:15:53.515 "dhgroup": "ffdhe3072" 00:15:53.515 } 00:15:53.515 } 00:15:53.515 ]' 00:15:53.515 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.515 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.515 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.515 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:53.515 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.515 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.515 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.515 17:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.774 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:15:53.774 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.377 17:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.377 17:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.636 00:15:54.895 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.895 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.895 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.895 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.895 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.895 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.895 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.895 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.895 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.895 { 00:15:54.895 "cntlid": 21, 00:15:54.895 "qid": 0, 00:15:54.895 "state": "enabled", 00:15:54.895 "thread": "nvmf_tgt_poll_group_000", 00:15:54.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:54.895 "listen_address": { 00:15:54.895 "trtype": "TCP", 00:15:54.895 "adrfam": "IPv4", 00:15:54.895 "traddr": "10.0.0.2", 00:15:54.895 "trsvcid": "4420" 00:15:54.895 }, 00:15:54.895 "peer_address": { 
00:15:54.895 "trtype": "TCP", 00:15:54.895 "adrfam": "IPv4", 00:15:54.895 "traddr": "10.0.0.1", 00:15:54.895 "trsvcid": "60422" 00:15:54.895 }, 00:15:54.895 "auth": { 00:15:54.895 "state": "completed", 00:15:54.895 "digest": "sha256", 00:15:54.895 "dhgroup": "ffdhe3072" 00:15:54.895 } 00:15:54.895 } 00:15:54.895 ]' 00:15:54.895 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.895 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.895 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.154 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:55.154 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.154 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.154 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.154 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.413 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:15:55.413 17:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:55.980 17:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.980 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.239 00:15:56.239 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.239 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.239 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.497 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.497 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.497 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.497 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.497 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.497 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.497 { 00:15:56.497 "cntlid": 23, 00:15:56.497 "qid": 0, 00:15:56.497 "state": "enabled", 00:15:56.497 "thread": "nvmf_tgt_poll_group_000", 00:15:56.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:56.497 "listen_address": { 00:15:56.497 "trtype": "TCP", 00:15:56.497 "adrfam": "IPv4", 00:15:56.497 "traddr": "10.0.0.2", 00:15:56.497 "trsvcid": "4420" 00:15:56.497 }, 00:15:56.497 "peer_address": { 00:15:56.497 "trtype": "TCP", 00:15:56.497 "adrfam": "IPv4", 00:15:56.497 "traddr": "10.0.0.1", 00:15:56.497 "trsvcid": "60454" 00:15:56.497 }, 00:15:56.497 "auth": { 00:15:56.497 "state": "completed", 00:15:56.497 "digest": "sha256", 00:15:56.497 "dhgroup": "ffdhe3072" 00:15:56.497 } 00:15:56.497 } 00:15:56.497 ]' 00:15:56.497 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.497 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.497 17:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.497 17:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:56.497 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.756 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.756 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.756 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.756 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:15:56.756 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:15:57.324 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.324 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:57.324 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.324 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
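[editor's note] The pass/fail logic repeated throughout this log — the `jq -r '.[0].auth.digest'`, `'.[0].auth.dhgroup'`, and `'.[0].auth.state'` checks against the `nvmf_subsystem_get_qpairs` output — can be sketched in Python. The sample record below is trimmed from the qpairs JSON shown in the log; `check_auth` is a hypothetical helper for illustration, not part of the SPDK test scripts.

```python
import json

# Sample shaped like the `rpc.py nvmf_subsystem_get_qpairs` output in the log
# (values from the ffdhe3072 run; only the fields the test inspects are kept).
qpairs_json = '''
[
  {
    "cntlid": 23,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "ffdhe3072"
    }
  }
]
'''

def check_auth(qpairs_text, digest, dhgroup):
    """Mirror the jq checks: .[0].auth.digest, .dhgroup, and .state."""
    auth = json.loads(qpairs_text)[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

print(check_auth(qpairs_json, "sha256", "ffdhe3072"))  # True
```

Each iteration of the test loop re-runs this comparison with the digest/dhgroup pair it just configured via `bdev_nvme_set_options`, so a mismatch here is what would fail the `[[ ... == ... ]]` assertions seen in the trace.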
00:15:57.324 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.324 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.324 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.324 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:57.324 17:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:57.583 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:57.583 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.583 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.583 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:57.583 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:57.583 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.583 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.583 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.583 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:57.583 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.583 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.583 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.583 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.842 00:15:57.842 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.842 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.842 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.100 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.100 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.100 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.100 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.100 17:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.100 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.100 { 00:15:58.100 "cntlid": 25, 00:15:58.100 "qid": 0, 00:15:58.100 "state": "enabled", 00:15:58.100 "thread": "nvmf_tgt_poll_group_000", 00:15:58.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:58.100 "listen_address": { 00:15:58.100 "trtype": "TCP", 00:15:58.100 "adrfam": "IPv4", 00:15:58.100 "traddr": "10.0.0.2", 00:15:58.100 "trsvcid": "4420" 00:15:58.100 }, 00:15:58.100 "peer_address": { 00:15:58.101 "trtype": "TCP", 00:15:58.101 "adrfam": "IPv4", 00:15:58.101 "traddr": "10.0.0.1", 00:15:58.101 "trsvcid": "60490" 00:15:58.101 }, 00:15:58.101 "auth": { 00:15:58.101 "state": "completed", 00:15:58.101 "digest": "sha256", 00:15:58.101 "dhgroup": "ffdhe4096" 00:15:58.101 } 00:15:58.101 } 00:15:58.101 ]' 00:15:58.101 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.101 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.101 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.360 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.360 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.360 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.360 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.360 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.360 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:15:58.360 17:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:15:58.926 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.926 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:58.926 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.926 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.926 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.926 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.926 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:58.926 17:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:59.185 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:59.185 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.185 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:59.185 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:59.185 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:59.185 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.185 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.185 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.185 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.185 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.185 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.185 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.185 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.443 00:15:59.443 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.443 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.444 17:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.703 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.703 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.703 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.703 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.703 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.703 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.703 { 00:15:59.703 "cntlid": 27, 00:15:59.703 "qid": 0, 00:15:59.703 "state": "enabled", 00:15:59.703 "thread": "nvmf_tgt_poll_group_000", 00:15:59.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:59.703 "listen_address": { 00:15:59.703 "trtype": "TCP", 00:15:59.703 "adrfam": "IPv4", 00:15:59.703 "traddr": "10.0.0.2", 00:15:59.703 
"trsvcid": "4420" 00:15:59.703 }, 00:15:59.703 "peer_address": { 00:15:59.703 "trtype": "TCP", 00:15:59.703 "adrfam": "IPv4", 00:15:59.703 "traddr": "10.0.0.1", 00:15:59.703 "trsvcid": "60526" 00:15:59.703 }, 00:15:59.703 "auth": { 00:15:59.703 "state": "completed", 00:15:59.703 "digest": "sha256", 00:15:59.703 "dhgroup": "ffdhe4096" 00:15:59.703 } 00:15:59.703 } 00:15:59.703 ]' 00:15:59.703 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.703 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.703 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.961 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.961 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.961 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.961 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.961 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.961 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:15:59.961 17:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:00.528 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.528 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:00.528 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.528 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.528 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.528 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.528 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:00.528 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:00.787 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:00.787 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.787 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:00.787 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:00.787 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:00.787 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.787 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.787 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.787 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.787 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.787 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.787 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.787 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.047 00:16:01.047 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.047 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:01.047 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.306 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.306 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.306 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.306 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.306 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.306 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.306 { 00:16:01.306 "cntlid": 29, 00:16:01.306 "qid": 0, 00:16:01.306 "state": "enabled", 00:16:01.306 "thread": "nvmf_tgt_poll_group_000", 00:16:01.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:01.306 "listen_address": { 00:16:01.306 "trtype": "TCP", 00:16:01.306 "adrfam": "IPv4", 00:16:01.306 "traddr": "10.0.0.2", 00:16:01.306 "trsvcid": "4420" 00:16:01.306 }, 00:16:01.306 "peer_address": { 00:16:01.306 "trtype": "TCP", 00:16:01.306 "adrfam": "IPv4", 00:16:01.306 "traddr": "10.0.0.1", 00:16:01.306 "trsvcid": "54378" 00:16:01.306 }, 00:16:01.306 "auth": { 00:16:01.306 "state": "completed", 00:16:01.306 "digest": "sha256", 00:16:01.306 "dhgroup": "ffdhe4096" 00:16:01.306 } 00:16:01.306 } 00:16:01.306 ]' 00:16:01.306 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.306 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.306 17:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.306 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:01.564 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.564 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.564 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.565 17:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.565 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:01.565 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:02.132 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.132 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:02.132 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.132 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.391 17:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.650 00:16:02.650 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.650 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.650 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.908 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.908 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.908 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.908 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:02.908 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.908 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.908 { 00:16:02.908 "cntlid": 31, 00:16:02.908 "qid": 0, 00:16:02.908 "state": "enabled", 00:16:02.908 "thread": "nvmf_tgt_poll_group_000", 00:16:02.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:02.908 "listen_address": { 00:16:02.908 "trtype": "TCP", 00:16:02.908 "adrfam": "IPv4", 00:16:02.908 "traddr": "10.0.0.2", 00:16:02.908 "trsvcid": "4420" 00:16:02.908 }, 00:16:02.908 "peer_address": { 00:16:02.908 "trtype": "TCP", 00:16:02.908 "adrfam": "IPv4", 00:16:02.908 "traddr": "10.0.0.1", 00:16:02.908 "trsvcid": "54422" 00:16:02.908 }, 00:16:02.908 "auth": { 00:16:02.908 "state": "completed", 00:16:02.908 "digest": "sha256", 00:16:02.908 "dhgroup": "ffdhe4096" 00:16:02.908 } 00:16:02.908 } 00:16:02.908 ]' 00:16:02.908 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.908 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.908 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.167 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:03.167 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.167 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.167 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.167 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.167 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:03.167 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:03.734 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.734 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:03.734 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.734 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.992 17:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.992 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.560 00:16:04.560 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.560 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.560 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.560 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.560 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.560 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.560 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.560 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.560 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.560 { 00:16:04.560 "cntlid": 33, 00:16:04.560 "qid": 0, 00:16:04.560 "state": "enabled", 00:16:04.560 "thread": "nvmf_tgt_poll_group_000", 00:16:04.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:04.560 "listen_address": { 00:16:04.560 "trtype": "TCP", 00:16:04.560 "adrfam": "IPv4", 00:16:04.560 "traddr": "10.0.0.2", 00:16:04.560 
"trsvcid": "4420" 00:16:04.560 }, 00:16:04.560 "peer_address": { 00:16:04.560 "trtype": "TCP", 00:16:04.560 "adrfam": "IPv4", 00:16:04.560 "traddr": "10.0.0.1", 00:16:04.560 "trsvcid": "54446" 00:16:04.560 }, 00:16:04.560 "auth": { 00:16:04.560 "state": "completed", 00:16:04.560 "digest": "sha256", 00:16:04.560 "dhgroup": "ffdhe6144" 00:16:04.560 } 00:16:04.560 } 00:16:04.560 ]' 00:16:04.560 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.819 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.819 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.819 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.819 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.819 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.819 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.819 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.078 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:05.078 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:05.646 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.646 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:05.646 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.646 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.646 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.646 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.646 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:05.646 17:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:05.646 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:05.646 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.646 17:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:05.646 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:05.646 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:05.646 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.646 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.646 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.646 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.646 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.646 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.646 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.646 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.213 00:16:06.213 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.213 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.213 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.213 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.213 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.213 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.213 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.213 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.213 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.213 { 00:16:06.213 "cntlid": 35, 00:16:06.213 "qid": 0, 00:16:06.213 "state": "enabled", 00:16:06.213 "thread": "nvmf_tgt_poll_group_000", 00:16:06.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:06.213 "listen_address": { 00:16:06.213 "trtype": "TCP", 00:16:06.213 "adrfam": "IPv4", 00:16:06.213 "traddr": "10.0.0.2", 00:16:06.213 "trsvcid": "4420" 00:16:06.213 }, 00:16:06.213 "peer_address": { 00:16:06.213 "trtype": "TCP", 00:16:06.213 "adrfam": "IPv4", 00:16:06.213 "traddr": "10.0.0.1", 00:16:06.213 "trsvcid": "54474" 00:16:06.213 }, 00:16:06.213 "auth": { 00:16:06.213 "state": "completed", 00:16:06.213 "digest": "sha256", 00:16:06.213 "dhgroup": "ffdhe6144" 00:16:06.213 } 00:16:06.213 } 00:16:06.213 ]' 00:16:06.213 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.508 17:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.509 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.509 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:06.509 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.509 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.509 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.509 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.807 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:06.807 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:07.065 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.324 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.891 00:16:07.891 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.891 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.891 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.891 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.891 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.891 17:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.891 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.891 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.891 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.891 { 00:16:07.891 "cntlid": 37, 00:16:07.891 "qid": 0, 00:16:07.891 "state": "enabled", 00:16:07.891 "thread": "nvmf_tgt_poll_group_000", 00:16:07.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:07.891 "listen_address": { 00:16:07.891 "trtype": "TCP", 00:16:07.891 "adrfam": "IPv4", 00:16:07.891 "traddr": "10.0.0.2", 00:16:07.891 "trsvcid": "4420" 00:16:07.891 }, 00:16:07.891 "peer_address": { 00:16:07.891 "trtype": "TCP", 00:16:07.891 "adrfam": "IPv4", 00:16:07.891 "traddr": "10.0.0.1", 00:16:07.891 "trsvcid": "54514" 00:16:07.891 }, 00:16:07.891 "auth": { 00:16:07.891 "state": "completed", 00:16:07.891 "digest": "sha256", 00:16:07.891 "dhgroup": "ffdhe6144" 00:16:07.891 } 00:16:07.891 } 00:16:07.891 ]' 00:16:07.891 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.150 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.150 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.150 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:08.150 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.150 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.150 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.150 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.409 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:08.409 17:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:08.975 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.975 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:08.975 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.975 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.975 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.975 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.975 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:08.975 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:08.975 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:08.975 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.975 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.975 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:08.975 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:08.976 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.976 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:08.976 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.976 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.976 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.976 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:08.976 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.976 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.542 00:16:09.542 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.542 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.542 17:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.542 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.542 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.542 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.542 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.543 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.543 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.543 { 00:16:09.543 "cntlid": 39, 00:16:09.543 "qid": 0, 00:16:09.543 "state": "enabled", 00:16:09.543 "thread": "nvmf_tgt_poll_group_000", 00:16:09.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:09.543 "listen_address": { 00:16:09.543 "trtype": "TCP", 00:16:09.543 "adrfam": 
"IPv4", 00:16:09.543 "traddr": "10.0.0.2", 00:16:09.543 "trsvcid": "4420" 00:16:09.543 }, 00:16:09.543 "peer_address": { 00:16:09.543 "trtype": "TCP", 00:16:09.543 "adrfam": "IPv4", 00:16:09.543 "traddr": "10.0.0.1", 00:16:09.543 "trsvcid": "54552" 00:16:09.543 }, 00:16:09.543 "auth": { 00:16:09.543 "state": "completed", 00:16:09.543 "digest": "sha256", 00:16:09.543 "dhgroup": "ffdhe6144" 00:16:09.543 } 00:16:09.543 } 00:16:09.543 ]' 00:16:09.543 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.543 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.543 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.801 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.801 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.801 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.801 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.801 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.060 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:10.060 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:10.628 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.628 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:10.628 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.628 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.628 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.628 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.628 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.628 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.628 17:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.628 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:10.628 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.628 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.628 
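The key3 iteration above attaches with `--dhchap-key key3` only — no controller key. That comes from target/auth.sh@68, `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})`: bash's `:+` alternate-value expansion emits the flag pair only when `ckeys[keyid]` is set and non-empty. A small demonstration of the idiom (the placeholder key values are assumptions):

```shell
#!/usr/bin/env bash
# ${var:+words} expands to "words" only when var is set and non-null,
# so an empty ckey entry makes the flag pair vanish from the array.
ckeys=("dummy0" "dummy1" "dummy2" "")   # placeholder keys; keyid 3 has none

ckey_args() {
    local keyid=$1
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]}"
}

ckey_args 0   # flag pair present
ckey_args 3   # expands to nothing
```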
17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:10.628 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.628 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.628 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.628 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.628 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.628 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.628 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.628 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.628 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.195 00:16:11.195 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.195 17:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.195 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.454 { 00:16:11.454 "cntlid": 41, 00:16:11.454 "qid": 0, 00:16:11.454 "state": "enabled", 00:16:11.454 "thread": "nvmf_tgt_poll_group_000", 00:16:11.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:11.454 "listen_address": { 00:16:11.454 "trtype": "TCP", 00:16:11.454 "adrfam": "IPv4", 00:16:11.454 "traddr": "10.0.0.2", 00:16:11.454 "trsvcid": "4420" 00:16:11.454 }, 00:16:11.454 "peer_address": { 00:16:11.454 "trtype": "TCP", 00:16:11.454 "adrfam": "IPv4", 00:16:11.454 "traddr": "10.0.0.1", 00:16:11.454 "trsvcid": "57626" 00:16:11.454 }, 00:16:11.454 "auth": { 00:16:11.454 "state": "completed", 00:16:11.454 "digest": "sha256", 00:16:11.454 "dhgroup": "ffdhe8192" 00:16:11.454 } 00:16:11.454 } 00:16:11.454 ]' 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.454 17:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.713 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:11.713 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:12.280 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.280 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:12.280 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.280 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.280 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.280 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.280 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:12.280 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:12.539 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:12.539 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.539 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.539 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:12.539 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:12.539 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.539 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:12.539 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.539 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.539 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.539 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.539 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.539 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.106 00:16:13.106 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.106 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.106 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.106 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.106 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.106 17:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.106 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.106 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.106 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.106 { 00:16:13.106 "cntlid": 43, 00:16:13.106 "qid": 0, 00:16:13.106 "state": "enabled", 00:16:13.106 "thread": "nvmf_tgt_poll_group_000", 00:16:13.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:13.106 "listen_address": { 00:16:13.106 "trtype": "TCP", 00:16:13.106 "adrfam": "IPv4", 00:16:13.106 "traddr": "10.0.0.2", 00:16:13.106 "trsvcid": "4420" 00:16:13.106 }, 00:16:13.106 "peer_address": { 00:16:13.106 "trtype": "TCP", 00:16:13.106 "adrfam": "IPv4", 00:16:13.106 "traddr": "10.0.0.1", 00:16:13.106 "trsvcid": "57650" 00:16:13.106 }, 00:16:13.106 "auth": { 00:16:13.106 "state": "completed", 00:16:13.106 "digest": "sha256", 00:16:13.106 "dhgroup": "ffdhe8192" 00:16:13.106 } 00:16:13.106 } 00:16:13.106 ]' 00:16:13.106 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.365 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.365 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.365 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:13.365 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.365 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.365 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.365 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.623 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:13.623 17:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:14.190 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.190 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:14.190 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.190 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.190 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.190 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.190 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:14.190 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:14.449 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:14.449 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.449 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.449 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:14.449 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:14.449 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.449 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.449 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.449 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.449 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.449 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.449 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.449 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.707 00:16:14.966 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.966 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.966 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.966 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.966 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.966 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.966 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.966 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.966 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.966 { 00:16:14.966 "cntlid": 45, 00:16:14.966 "qid": 0, 00:16:14.966 "state": "enabled", 00:16:14.966 "thread": "nvmf_tgt_poll_group_000", 00:16:14.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:14.966 
"listen_address": { 00:16:14.966 "trtype": "TCP", 00:16:14.966 "adrfam": "IPv4", 00:16:14.966 "traddr": "10.0.0.2", 00:16:14.966 "trsvcid": "4420" 00:16:14.966 }, 00:16:14.966 "peer_address": { 00:16:14.966 "trtype": "TCP", 00:16:14.966 "adrfam": "IPv4", 00:16:14.966 "traddr": "10.0.0.1", 00:16:14.966 "trsvcid": "57692" 00:16:14.966 }, 00:16:14.966 "auth": { 00:16:14.966 "state": "completed", 00:16:14.966 "digest": "sha256", 00:16:14.966 "dhgroup": "ffdhe8192" 00:16:14.966 } 00:16:14.966 } 00:16:14.966 ]' 00:16:14.966 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.225 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.225 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.225 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.225 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.225 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.225 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.225 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.483 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:15.483 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:16.050 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.051 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.618 00:16:16.618 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.618 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:16.618 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.876 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.876 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.876 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.876 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.876 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.876 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.876 { 00:16:16.876 "cntlid": 47, 00:16:16.876 "qid": 0, 00:16:16.876 "state": "enabled", 00:16:16.876 "thread": "nvmf_tgt_poll_group_000", 00:16:16.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:16.876 "listen_address": { 00:16:16.876 "trtype": "TCP", 00:16:16.876 "adrfam": "IPv4", 00:16:16.876 "traddr": "10.0.0.2", 00:16:16.876 "trsvcid": "4420" 00:16:16.876 }, 00:16:16.876 "peer_address": { 00:16:16.876 "trtype": "TCP", 00:16:16.876 "adrfam": "IPv4", 00:16:16.876 "traddr": "10.0.0.1", 00:16:16.876 "trsvcid": "57714" 00:16:16.876 }, 00:16:16.876 "auth": { 00:16:16.876 "state": "completed", 00:16:16.876 "digest": "sha256", 00:16:16.876 "dhgroup": "ffdhe8192" 00:16:16.876 } 00:16:16.876 } 00:16:16.876 ]' 00:16:16.876 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.876 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.876 17:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.877 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:16.877 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.135 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.135 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.135 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.135 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:17.135 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:17.703 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.703 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:17.703 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:17.703 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.703 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.703 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:17.703 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.703 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.703 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:17.703 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:17.962 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:17.962 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.962 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.962 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:17.962 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:17.962 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.962 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.962 
17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.962 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.962 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.962 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.962 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.962 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.220 00:16:18.220 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.220 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.220 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.479 { 00:16:18.479 "cntlid": 49, 00:16:18.479 "qid": 0, 00:16:18.479 "state": "enabled", 00:16:18.479 "thread": "nvmf_tgt_poll_group_000", 00:16:18.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:18.479 "listen_address": { 00:16:18.479 "trtype": "TCP", 00:16:18.479 "adrfam": "IPv4", 00:16:18.479 "traddr": "10.0.0.2", 00:16:18.479 "trsvcid": "4420" 00:16:18.479 }, 00:16:18.479 "peer_address": { 00:16:18.479 "trtype": "TCP", 00:16:18.479 "adrfam": "IPv4", 00:16:18.479 "traddr": "10.0.0.1", 00:16:18.479 "trsvcid": "57736" 00:16:18.479 }, 00:16:18.479 "auth": { 00:16:18.479 "state": "completed", 00:16:18.479 "digest": "sha384", 00:16:18.479 "dhgroup": "null" 00:16:18.479 } 00:16:18.479 } 00:16:18.479 ]' 00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:16:18.479 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.737 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:18.737 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:19.303 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.303 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:19.303 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.303 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.303 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.303 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.303 17:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:19.303 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:19.561 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:19.561 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.561 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.561 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:19.561 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:19.561 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.561 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.561 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.561 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.561 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.561 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.561 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.561 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.820 00:16:19.820 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.820 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.820 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.078 { 00:16:20.078 "cntlid": 51, 00:16:20.078 "qid": 0, 00:16:20.078 "state": "enabled", 00:16:20.078 "thread": "nvmf_tgt_poll_group_000", 00:16:20.078 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:20.078 "listen_address": { 00:16:20.078 "trtype": "TCP", 00:16:20.078 "adrfam": "IPv4", 00:16:20.078 "traddr": "10.0.0.2", 00:16:20.078 "trsvcid": "4420" 00:16:20.078 }, 00:16:20.078 "peer_address": { 00:16:20.078 "trtype": "TCP", 00:16:20.078 "adrfam": "IPv4", 00:16:20.078 "traddr": "10.0.0.1", 00:16:20.078 "trsvcid": "57760" 00:16:20.078 }, 00:16:20.078 "auth": { 00:16:20.078 "state": "completed", 00:16:20.078 "digest": "sha384", 00:16:20.078 "dhgroup": "null" 00:16:20.078 } 00:16:20.078 } 00:16:20.078 ]' 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.078 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.337 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:20.337 17:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:20.904 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.904 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:20.904 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.904 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.904 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.904 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.904 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:20.904 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:21.163 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:21.163 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
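Each iteration of the loop above configures one key id and re-attaches the host controller with it. A detail worth calling out is the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion: the `--dhchap-ctrlr-key` flag is only appended when a controller key exists for that key id (key3 has none, as the later iterations show). A minimal Python sketch of how the final `bdev_nvme_attach_controller` argv is assembled — `build_attach_argv` is a hypothetical helper, not SPDK code; the flags and values are the ones visible in the log:

```python
# Hypothetical sketch of target/auth.sh's bdev_connect argv construction.
# HOSTNQN/SUBNQN and all flags are taken verbatim from the log above.
HOSTNQN = "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562"
SUBNQN = "nqn.2024-03.io.spdk:cnode0"

def build_attach_argv(keyid, have_ctrlr_key=True):
    """Mirror ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}):
    the controller-key flag is emitted only when a ckey exists for this id."""
    argv = ["rpc.py", "-s", "/var/tmp/host.sock", "bdev_nvme_attach_controller",
            "-t", "tcp", "-f", "ipv4", "-a", "10.0.0.2", "-s", "4420",
            "-q", HOSTNQN, "-n", SUBNQN, "-b", "nvme0",
            "--dhchap-key", f"key{keyid}"]
    if have_ctrlr_key:
        # Bidirectional auth: the host also verifies the controller.
        argv += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
    return argv

print(" ".join(build_attach_argv(1)))
print(" ".join(build_attach_argv(3, have_ctrlr_key=False)))  # key3: no ckey
```

With a controller key the call is bidirectional; without one (key3), only the target authenticates the host.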
key ckey qpairs 00:16:21.163 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.163 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.163 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:21.163 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.163 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.163 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.163 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.163 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.163 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.163 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.163 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.422 00:16:21.422 17:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.422 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.422 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.422 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.422 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.422 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.422 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.681 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.681 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.681 { 00:16:21.681 "cntlid": 53, 00:16:21.681 "qid": 0, 00:16:21.681 "state": "enabled", 00:16:21.681 "thread": "nvmf_tgt_poll_group_000", 00:16:21.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:21.681 "listen_address": { 00:16:21.681 "trtype": "TCP", 00:16:21.681 "adrfam": "IPv4", 00:16:21.681 "traddr": "10.0.0.2", 00:16:21.681 "trsvcid": "4420" 00:16:21.681 }, 00:16:21.681 "peer_address": { 00:16:21.681 "trtype": "TCP", 00:16:21.681 "adrfam": "IPv4", 00:16:21.681 "traddr": "10.0.0.1", 00:16:21.681 "trsvcid": "46070" 00:16:21.681 }, 00:16:21.681 "auth": { 00:16:21.681 "state": "completed", 00:16:21.681 "digest": "sha384", 00:16:21.681 "dhgroup": "null" 00:16:21.681 } 00:16:21.681 } 00:16:21.681 ]' 00:16:21.681 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:21.681 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.681 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.681 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.681 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.681 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.681 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.681 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.940 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:21.940 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:22.653 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.653 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
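The secrets handed to `nvme connect` above are `DHHC-1:…:` strings. As a rough sketch of their structure (per our reading of the NVMe DH-HMAC-CHAP format, TP 8006 — treat the field meanings as an assumption, not a statement of the spec): a `DHHC-1` prefix, a two-digit indicator (`00` appears to mean the secret is used as-is, `01`/`02`/`03` a SHA-256/384/512-transformed secret), a base64 payload, and a trailing `:`. The hypothetical parser below splits one of the secrets from this log:

```python
import base64

def parse_dhchap_secret(secret):
    """Split a DH-HMAC-CHAP secret of the form "DHHC-1:<hh>:<base64>:".
    Field meanings are our reading of the format, not quoted from the log."""
    prefix, hash_id, b64, trailer = secret.split(":")
    assert prefix == "DHHC-1" and trailer == ""
    # The base64 payload decodes to the secret material plus, per the format
    # as we understand it, a short trailing checksum.
    return hash_id, base64.b64decode(b64)

# One of the --dhchap-secret values from the log above (an "02" secret).
hash_id, raw = parse_dhchap_secret(
    "DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==:"
)
print(hash_id, len(raw))
```

The parser is for illustration only; SPDK and nvme-cli validate these strings themselves.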
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:22.653 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.653 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.653 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.653 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.653 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:22.653 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:22.653 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:22.653 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.653 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.653 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:22.653 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.653 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.653 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:22.653 
17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.653 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.653 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.653 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:22.653 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.653 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.912 00:16:22.912 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.912 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.912 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.171 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.171 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.171 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.171 17:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.171 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.171 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.171 { 00:16:23.171 "cntlid": 55, 00:16:23.171 "qid": 0, 00:16:23.171 "state": "enabled", 00:16:23.171 "thread": "nvmf_tgt_poll_group_000", 00:16:23.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:23.171 "listen_address": { 00:16:23.171 "trtype": "TCP", 00:16:23.171 "adrfam": "IPv4", 00:16:23.171 "traddr": "10.0.0.2", 00:16:23.171 "trsvcid": "4420" 00:16:23.171 }, 00:16:23.171 "peer_address": { 00:16:23.171 "trtype": "TCP", 00:16:23.171 "adrfam": "IPv4", 00:16:23.171 "traddr": "10.0.0.1", 00:16:23.171 "trsvcid": "46096" 00:16:23.171 }, 00:16:23.171 "auth": { 00:16:23.171 "state": "completed", 00:16:23.171 "digest": "sha384", 00:16:23.171 "dhgroup": "null" 00:16:23.171 } 00:16:23.171 } 00:16:23.171 ]' 00:16:23.171 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.171 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.171 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.171 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.171 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.171 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.171 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.171 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.429 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:23.429 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:23.996 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.996 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:23.996 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.996 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.996 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.996 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.996 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.996 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:23.996 17:26:50 
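After each attach, the test fetches the qpair list with `nvmf_subsystem_get_qpairs` and checks the negotiated auth parameters with `jq -r '.[0].auth.digest'`, `'.[0].auth.dhgroup'`, and `'.[0].auth.state'`. A minimal Python equivalent of those checks, run against a trimmed copy of the JSON printed in the log (the `peer_address` block is elided here for brevity):

```python
import json

# Trimmed qpair JSON as printed by nvmf_subsystem_get_qpairs in the log above.
qpairs_json = """
[
  {
    "cntlid": 55,
    "qid": 0,
    "state": "enabled",
    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                       "traddr": "10.0.0.2", "trsvcid": "4420"},
    "auth": {"state": "completed", "digest": "sha384", "dhgroup": "null"}
  }
]
"""

qpairs = json.loads(qpairs_json)
auth = qpairs[0]["auth"]
# Equivalent of the three jq comparisons: the connection only counts as
# authenticated when the negotiated values match what was configured.
assert auth["digest"] == "sha384"
assert auth["dhgroup"] == "null"
assert auth["state"] == "completed"
print("auth verified:", auth)
```

This is why each cycle ends only after all three fields compare equal; a mismatch would fail the `[[ … == … ]]` tests in the trace.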
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:24.255 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:24.255 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.255 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.255 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:24.255 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:24.255 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.255 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.255 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.255 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.255 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.255 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.255 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.255 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.514 00:16:24.514 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.514 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.514 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.514 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.514 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.514 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.514 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.772 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.772 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.772 { 00:16:24.772 "cntlid": 57, 00:16:24.772 "qid": 0, 00:16:24.772 "state": "enabled", 00:16:24.772 "thread": "nvmf_tgt_poll_group_000", 00:16:24.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:24.772 "listen_address": { 00:16:24.772 "trtype": "TCP", 00:16:24.772 "adrfam": "IPv4", 00:16:24.772 "traddr": "10.0.0.2", 00:16:24.772 
"trsvcid": "4420" 00:16:24.772 }, 00:16:24.772 "peer_address": { 00:16:24.772 "trtype": "TCP", 00:16:24.772 "adrfam": "IPv4", 00:16:24.772 "traddr": "10.0.0.1", 00:16:24.772 "trsvcid": "46128" 00:16:24.772 }, 00:16:24.772 "auth": { 00:16:24.772 "state": "completed", 00:16:24.772 "digest": "sha384", 00:16:24.772 "dhgroup": "ffdhe2048" 00:16:24.772 } 00:16:24.772 } 00:16:24.772 ]' 00:16:24.772 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.773 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.773 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.773 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.773 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.773 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.773 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.773 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.031 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:25.031 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:25.599 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.599 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:25.599 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.599 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.599 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.599 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.599 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.599 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.858 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:25.858 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.858 17:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.858 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.858 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.858 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.858 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.858 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.858 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.858 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.858 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.858 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.858 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.858 00:16:26.117 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.117 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.117 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.117 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.117 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.117 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.117 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.117 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.117 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.117 { 00:16:26.117 "cntlid": 59, 00:16:26.117 "qid": 0, 00:16:26.117 "state": "enabled", 00:16:26.117 "thread": "nvmf_tgt_poll_group_000", 00:16:26.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:26.117 "listen_address": { 00:16:26.117 "trtype": "TCP", 00:16:26.117 "adrfam": "IPv4", 00:16:26.117 "traddr": "10.0.0.2", 00:16:26.117 "trsvcid": "4420" 00:16:26.117 }, 00:16:26.117 "peer_address": { 00:16:26.117 "trtype": "TCP", 00:16:26.117 "adrfam": "IPv4", 00:16:26.117 "traddr": "10.0.0.1", 00:16:26.117 "trsvcid": "46154" 00:16:26.117 }, 00:16:26.117 "auth": { 00:16:26.117 "state": "completed", 00:16:26.117 "digest": "sha384", 00:16:26.117 "dhgroup": "ffdhe2048" 00:16:26.117 } 00:16:26.117 } 00:16:26.117 ]' 00:16:26.117 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.376 17:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.376 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.376 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:26.376 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.376 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.376 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.376 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.635 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:26.635 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:27.202 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.202 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:27.202 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.202 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.202 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.202 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:27.202 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:27.202 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:27.202 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:16:27.461 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:27.461 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:27.461 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:27.461 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:27.461 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:27.461 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:27.461 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.461 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.461 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.461 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:27.461 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:27.461 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:27.461
00:16:27.720 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:27.720 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:27.720 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:27.720 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:27.720 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:27.720 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.720 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.720 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.720 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:27.720 {
00:16:27.720 "cntlid": 61,
00:16:27.720 "qid": 0,
00:16:27.720 "state": "enabled",
00:16:27.720 "thread": "nvmf_tgt_poll_group_000",
00:16:27.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:16:27.720 "listen_address": {
00:16:27.720 "trtype": "TCP",
00:16:27.720 "adrfam": "IPv4",
00:16:27.720 "traddr": "10.0.0.2",
00:16:27.720 "trsvcid": "4420"
00:16:27.720 },
00:16:27.720 "peer_address": {
00:16:27.720 "trtype": "TCP",
00:16:27.720 "adrfam": "IPv4",
00:16:27.720 "traddr": "10.0.0.1",
00:16:27.720 "trsvcid": "46176"
00:16:27.720 },
00:16:27.720 "auth": {
00:16:27.720 "state": "completed",
00:16:27.720 "digest": "sha384",
00:16:27.720 "dhgroup": "ffdhe2048"
00:16:27.720 }
00:16:27.720 }
00:16:27.720 ]'
00:16:27.720 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:27.720 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:27.979 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:27.979 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:27.979 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:27.979 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:27.979 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:27.979 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:28.237 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY:
00:16:28.237 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY:
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:28.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:28.804 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:29.062
00:16:29.320 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:29.320 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:29.320 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:29.320 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:29.320 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:29.320 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.320 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.320 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.320 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:29.320 {
00:16:29.320 "cntlid": 63,
00:16:29.320 "qid": 0,
00:16:29.320 "state": "enabled",
00:16:29.320 "thread": "nvmf_tgt_poll_group_000",
00:16:29.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:16:29.321 "listen_address": {
00:16:29.321 "trtype": "TCP",
00:16:29.321 "adrfam": "IPv4",
00:16:29.321 "traddr": "10.0.0.2",
00:16:29.321 "trsvcid": "4420"
00:16:29.321 },
00:16:29.321 "peer_address": {
00:16:29.321 "trtype": "TCP",
00:16:29.321 "adrfam": "IPv4",
00:16:29.321 "traddr": "10.0.0.1",
00:16:29.321 "trsvcid": "46204"
00:16:29.321 },
00:16:29.321 "auth": {
00:16:29.321 "state": "completed",
00:16:29.321 "digest": "sha384",
00:16:29.321 "dhgroup": "ffdhe2048"
00:16:29.321 }
00:16:29.321 }
00:16:29.321 ]'
00:16:29.321 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:29.579 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:29.579 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:29.579 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:29.579 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:29.580 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:29.580 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:29.580 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:29.838 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=:
00:16:29.838 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=:
00:16:30.406 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:30.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:30.406 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:30.406 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.406 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.406 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.406 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:30.406 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:30.406 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:30.406 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:30.407 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:16:30.407 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:30.407 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:30.407 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:30.407 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:30.407 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:30.407 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:30.407 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.407 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.407 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.407 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:30.665 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:30.665 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:30.665
00:16:30.924 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:30.924 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:30.924 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:30.924 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:30.924 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:30.924 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.924 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.924 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.924 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:30.924 {
00:16:30.924 "cntlid": 65,
00:16:30.924 "qid": 0,
00:16:30.924 "state": "enabled",
00:16:30.924 "thread": "nvmf_tgt_poll_group_000",
00:16:30.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:16:30.924 "listen_address": {
00:16:30.924 "trtype": "TCP",
00:16:30.924 "adrfam": "IPv4",
00:16:30.924 "traddr": "10.0.0.2",
00:16:30.924 "trsvcid": "4420"
00:16:30.924 },
00:16:30.924 "peer_address": {
00:16:30.924 "trtype": "TCP",
00:16:30.924 "adrfam": "IPv4",
00:16:30.924 "traddr": "10.0.0.1",
00:16:30.924 "trsvcid": "40552"
00:16:30.924 },
00:16:30.924 "auth": {
00:16:30.924 "state": "completed",
00:16:30.924 "digest": "sha384",
00:16:30.924 "dhgroup": "ffdhe3072"
00:16:30.924 }
00:16:30.924 }
00:16:30.924 ]'
00:16:30.924 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:31.183 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:31.183 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:31.183 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:31.183 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:31.183 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:31.183 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:31.183 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:31.442 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=:
00:16:31.442 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=:
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:32.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.010 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.269
00:16:32.269 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:32.269 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:32.269 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:32.528 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:32.528 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:32.528 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:32.528 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.528 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:32.528 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:32.528 {
00:16:32.528 "cntlid": 67,
00:16:32.528 "qid": 0,
00:16:32.528 "state": "enabled",
00:16:32.528 "thread": "nvmf_tgt_poll_group_000",
00:16:32.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:16:32.528 "listen_address": {
00:16:32.528 "trtype": "TCP",
00:16:32.528 "adrfam": "IPv4",
00:16:32.528 "traddr": "10.0.0.2",
00:16:32.528 "trsvcid": "4420"
00:16:32.528 },
00:16:32.528 "peer_address": {
00:16:32.528 "trtype": "TCP",
00:16:32.528 "adrfam": "IPv4",
00:16:32.528 "traddr": "10.0.0.1",
00:16:32.528 "trsvcid": "40578"
00:16:32.528 },
00:16:32.528 "auth": {
00:16:32.528 "state": "completed",
00:16:32.528 "digest": "sha384",
00:16:32.528 "dhgroup": "ffdhe3072"
00:16:32.528 }
00:16:32.528 }
00:16:32.528 ]'
00:16:32.528 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:32.786 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:32.786 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:32.786 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:32.786 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:32.786 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:32.786 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:32.786 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:33.045 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==:
00:16:33.045 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==:
00:16:33.611 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.611 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:33.611 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.611 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.611 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.611 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:33.611 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:33.611 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:33.611 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:16:33.611 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:33.611 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:33.611 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:33.611 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:33.611 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:33.611 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.611 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.611 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.870 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.870 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.870 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.870 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.870
00:16:34.128 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:34.128 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:34.128 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:34.128 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:34.128 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:34.128 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.128 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.128 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.128 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:34.128 {
00:16:34.128 "cntlid": 69,
00:16:34.128 "qid": 0,
00:16:34.128 "state": "enabled",
00:16:34.128 "thread": "nvmf_tgt_poll_group_000",
00:16:34.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:16:34.128 "listen_address": {
00:16:34.128 "trtype": "TCP",
00:16:34.128 "adrfam": "IPv4",
00:16:34.128 "traddr": "10.0.0.2",
00:16:34.128 "trsvcid": "4420"
00:16:34.128 },
00:16:34.128 "peer_address": {
00:16:34.128 "trtype": "TCP",
00:16:34.128 "adrfam": "IPv4",
00:16:34.128 "traddr": "10.0.0.1",
00:16:34.128 "trsvcid": "40606"
00:16:34.128 },
00:16:34.128 "auth": {
00:16:34.128 "state": "completed",
00:16:34.128 "digest": "sha384",
00:16:34.128 "dhgroup": "ffdhe3072"
00:16:34.128 }
00:16:34.128 }
00:16:34.128 ]'
00:16:34.128 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:34.386 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:34.386 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:34.386 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:34.386 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:34.386 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:34.386 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:34.386 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:34.644 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY:
00:16:34.644 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY:
00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:35.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local
digest dhgroup key ckey qpairs 00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.211 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.470 00:16:35.729 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:35.729 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.729 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.729 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.729 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.729 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.729 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.729 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.729 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.729 { 00:16:35.729 "cntlid": 71, 00:16:35.729 "qid": 0, 00:16:35.729 "state": "enabled", 00:16:35.729 "thread": "nvmf_tgt_poll_group_000", 00:16:35.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:35.729 "listen_address": { 00:16:35.729 "trtype": "TCP", 00:16:35.729 "adrfam": "IPv4", 00:16:35.729 "traddr": "10.0.0.2", 00:16:35.729 "trsvcid": "4420" 00:16:35.729 }, 00:16:35.729 "peer_address": { 00:16:35.729 "trtype": "TCP", 00:16:35.729 "adrfam": "IPv4", 00:16:35.729 "traddr": "10.0.0.1", 00:16:35.729 "trsvcid": "40636" 00:16:35.729 }, 00:16:35.729 "auth": { 00:16:35.729 "state": "completed", 00:16:35.729 "digest": "sha384", 00:16:35.729 "dhgroup": "ffdhe3072" 00:16:35.729 } 00:16:35.729 } 00:16:35.729 ]' 00:16:35.729 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.729 17:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.729 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.991 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.991 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.991 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.991 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.991 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.249 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:36.249 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:36.817 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.818 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.818 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.818 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.818 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.818 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.818 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.818 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.076 00:16:37.335 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.335 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.335 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.335 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.335 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.335 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.335 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.335 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.335 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.335 { 00:16:37.335 "cntlid": 73, 00:16:37.335 "qid": 0, 00:16:37.335 "state": "enabled", 00:16:37.335 "thread": "nvmf_tgt_poll_group_000", 00:16:37.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:37.335 "listen_address": { 00:16:37.335 "trtype": "TCP", 00:16:37.335 "adrfam": "IPv4", 00:16:37.335 "traddr": "10.0.0.2", 00:16:37.335 "trsvcid": "4420" 00:16:37.335 }, 00:16:37.335 "peer_address": { 00:16:37.335 "trtype": "TCP", 00:16:37.335 "adrfam": "IPv4", 00:16:37.335 "traddr": "10.0.0.1", 00:16:37.335 "trsvcid": "40652" 00:16:37.335 }, 00:16:37.335 "auth": { 00:16:37.335 "state": "completed", 00:16:37.335 "digest": "sha384", 00:16:37.335 "dhgroup": "ffdhe4096" 00:16:37.335 } 00:16:37.335 } 00:16:37.335 ]' 00:16:37.335 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.335 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.335 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.594 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.594 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.594 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:37.594 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.594 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.853 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:37.853 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:38.421 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.421 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:38.421 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.421 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.421 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.421 17:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.421 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.421 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.421 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:38.421 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.421 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.421 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:38.421 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:38.422 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.422 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.422 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.422 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.422 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.422 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:38.422 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.422 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.680 00:16:38.938 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.938 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.938 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.938 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.938 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.938 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.938 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.938 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.938 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.938 { 00:16:38.938 "cntlid": 75, 00:16:38.938 "qid": 0, 00:16:38.938 "state": 
"enabled", 00:16:38.938 "thread": "nvmf_tgt_poll_group_000", 00:16:38.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:38.938 "listen_address": { 00:16:38.938 "trtype": "TCP", 00:16:38.938 "adrfam": "IPv4", 00:16:38.938 "traddr": "10.0.0.2", 00:16:38.938 "trsvcid": "4420" 00:16:38.938 }, 00:16:38.938 "peer_address": { 00:16:38.938 "trtype": "TCP", 00:16:38.938 "adrfam": "IPv4", 00:16:38.938 "traddr": "10.0.0.1", 00:16:38.938 "trsvcid": "40678" 00:16:38.938 }, 00:16:38.938 "auth": { 00:16:38.938 "state": "completed", 00:16:38.938 "digest": "sha384", 00:16:38.938 "dhgroup": "ffdhe4096" 00:16:38.938 } 00:16:38.938 } 00:16:38.938 ]' 00:16:38.938 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.197 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.197 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.197 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.197 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.197 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.197 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.197 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.455 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret 
DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:39.455 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 
ffdhe4096 2 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.022 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.281 00:16:40.281 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.281 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.281 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.541 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.541 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.541 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.541 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.541 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.541 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.541 { 00:16:40.541 "cntlid": 77, 00:16:40.541 "qid": 0, 00:16:40.541 "state": "enabled", 00:16:40.541 "thread": "nvmf_tgt_poll_group_000", 00:16:40.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:40.541 "listen_address": { 00:16:40.541 "trtype": "TCP", 00:16:40.541 "adrfam": "IPv4", 00:16:40.541 "traddr": "10.0.0.2", 00:16:40.541 "trsvcid": "4420" 00:16:40.541 }, 00:16:40.541 "peer_address": { 00:16:40.541 "trtype": "TCP", 00:16:40.541 "adrfam": "IPv4", 00:16:40.541 "traddr": "10.0.0.1", 00:16:40.541 "trsvcid": "32900" 00:16:40.541 }, 00:16:40.541 "auth": { 00:16:40.541 "state": "completed", 00:16:40.541 "digest": "sha384", 00:16:40.541 "dhgroup": "ffdhe4096" 00:16:40.541 } 
00:16:40.541 } 00:16:40.541 ]' 00:16:40.541 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.541 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.541 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.799 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.799 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.799 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.799 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.799 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.057 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:41.057 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:41.624 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:16:41.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.624 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:41.624 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.624 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.624 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.624 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.624 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:41.624 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:41.624 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:41.624 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.624 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.624 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.624 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.624 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.624 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:41.625 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.625 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.625 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.625 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.625 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.625 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.883 00:16:41.883 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.883 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.142 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.142 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.142 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:42.142 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.142 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.142 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.142 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.142 { 00:16:42.142 "cntlid": 79, 00:16:42.142 "qid": 0, 00:16:42.142 "state": "enabled", 00:16:42.142 "thread": "nvmf_tgt_poll_group_000", 00:16:42.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:42.142 "listen_address": { 00:16:42.142 "trtype": "TCP", 00:16:42.142 "adrfam": "IPv4", 00:16:42.142 "traddr": "10.0.0.2", 00:16:42.142 "trsvcid": "4420" 00:16:42.142 }, 00:16:42.142 "peer_address": { 00:16:42.142 "trtype": "TCP", 00:16:42.142 "adrfam": "IPv4", 00:16:42.142 "traddr": "10.0.0.1", 00:16:42.142 "trsvcid": "32924" 00:16:42.142 }, 00:16:42.142 "auth": { 00:16:42.142 "state": "completed", 00:16:42.142 "digest": "sha384", 00:16:42.142 "dhgroup": "ffdhe4096" 00:16:42.142 } 00:16:42.142 } 00:16:42.142 ]' 00:16:42.142 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.142 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.142 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.401 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.401 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.401 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.401 17:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.401 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.660 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:42.660 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.228 17:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.228 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.795 00:16:43.795 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.795 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.795 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.795 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.795 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.795 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.795 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.795 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.795 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.795 { 00:16:43.795 "cntlid": 81, 00:16:43.795 "qid": 0, 00:16:43.795 "state": "enabled", 00:16:43.795 "thread": "nvmf_tgt_poll_group_000", 00:16:43.796 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:43.796 "listen_address": { 00:16:43.796 "trtype": "TCP", 00:16:43.796 "adrfam": "IPv4", 00:16:43.796 "traddr": "10.0.0.2", 00:16:43.796 "trsvcid": "4420" 00:16:43.796 }, 00:16:43.796 "peer_address": { 00:16:43.796 "trtype": "TCP", 00:16:43.796 "adrfam": "IPv4", 00:16:43.796 "traddr": "10.0.0.1", 00:16:43.796 "trsvcid": "32938" 00:16:43.796 }, 00:16:43.796 "auth": { 00:16:43.796 "state": "completed", 00:16:43.796 "digest": "sha384", 00:16:43.796 "dhgroup": "ffdhe6144" 00:16:43.796 } 00:16:43.796 } 00:16:43.796 ]' 00:16:43.796 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.054 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.054 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.054 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.054 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.054 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.054 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.054 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.313 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret 
DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:44.313 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:44.881 17:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.881 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.449 00:16:45.449 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.449 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.449 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.449 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.449 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.449 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.449 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.449 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.449 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.449 { 00:16:45.449 "cntlid": 83, 00:16:45.449 "qid": 0, 00:16:45.449 "state": "enabled", 00:16:45.449 "thread": "nvmf_tgt_poll_group_000", 00:16:45.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:45.449 "listen_address": { 00:16:45.449 "trtype": "TCP", 00:16:45.449 "adrfam": "IPv4", 00:16:45.449 "traddr": "10.0.0.2", 00:16:45.449 "trsvcid": "4420" 00:16:45.449 }, 00:16:45.449 "peer_address": { 00:16:45.449 "trtype": "TCP", 00:16:45.449 "adrfam": "IPv4", 00:16:45.449 "traddr": "10.0.0.1", 00:16:45.449 "trsvcid": "32968" 00:16:45.449 }, 00:16:45.449 "auth": { 00:16:45.449 "state": 
"completed", 00:16:45.449 "digest": "sha384", 00:16:45.449 "dhgroup": "ffdhe6144" 00:16:45.449 } 00:16:45.449 } 00:16:45.449 ]' 00:16:45.449 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.707 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.707 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.707 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.707 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.707 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.707 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.707 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.966 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:45.966 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:46.532 17:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.532 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:46.532 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.532 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.532 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.532 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.532 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.532 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.532 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:46.532 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.532 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.532 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:46.532 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.532 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.532 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.532 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.532 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.532 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.532 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.532 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.532 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.098 00:16:47.098 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.098 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.098 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.098 
17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.098 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.098 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.098 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.098 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.098 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.098 { 00:16:47.098 "cntlid": 85, 00:16:47.098 "qid": 0, 00:16:47.098 "state": "enabled", 00:16:47.098 "thread": "nvmf_tgt_poll_group_000", 00:16:47.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:47.098 "listen_address": { 00:16:47.098 "trtype": "TCP", 00:16:47.098 "adrfam": "IPv4", 00:16:47.098 "traddr": "10.0.0.2", 00:16:47.098 "trsvcid": "4420" 00:16:47.098 }, 00:16:47.098 "peer_address": { 00:16:47.098 "trtype": "TCP", 00:16:47.098 "adrfam": "IPv4", 00:16:47.098 "traddr": "10.0.0.1", 00:16:47.098 "trsvcid": "33008" 00:16:47.098 }, 00:16:47.098 "auth": { 00:16:47.098 "state": "completed", 00:16:47.098 "digest": "sha384", 00:16:47.098 "dhgroup": "ffdhe6144" 00:16:47.098 } 00:16:47.098 } 00:16:47.098 ]' 00:16:47.098 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.356 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.356 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.356 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.356 17:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.356 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.356 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.356 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.614 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:47.614 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.181 
17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.181 17:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.181 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.746 00:16:48.746 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.746 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.746 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.746 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.746 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.746 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.746 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.746 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.746 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.746 { 00:16:48.746 "cntlid": 87, 00:16:48.746 
"qid": 0, 00:16:48.746 "state": "enabled", 00:16:48.746 "thread": "nvmf_tgt_poll_group_000", 00:16:48.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:48.746 "listen_address": { 00:16:48.746 "trtype": "TCP", 00:16:48.746 "adrfam": "IPv4", 00:16:48.746 "traddr": "10.0.0.2", 00:16:48.746 "trsvcid": "4420" 00:16:48.746 }, 00:16:48.746 "peer_address": { 00:16:48.746 "trtype": "TCP", 00:16:48.746 "adrfam": "IPv4", 00:16:48.746 "traddr": "10.0.0.1", 00:16:48.746 "trsvcid": "33022" 00:16:48.746 }, 00:16:48.746 "auth": { 00:16:48.746 "state": "completed", 00:16:48.746 "digest": "sha384", 00:16:48.746 "dhgroup": "ffdhe6144" 00:16:48.746 } 00:16:48.746 } 00:16:48.746 ]' 00:16:48.746 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.004 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.004 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.004 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.004 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.004 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.004 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.004 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.262 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:49.262 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:49.828 17:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.828 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.829 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.829 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.829 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.087 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.087 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.087 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.087 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.346 00:16:50.346 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.346 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.346 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.605 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.605 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.605 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.605 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.605 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.605 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.605 { 00:16:50.605 "cntlid": 89, 00:16:50.605 "qid": 0, 00:16:50.605 "state": "enabled", 00:16:50.605 "thread": "nvmf_tgt_poll_group_000", 00:16:50.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:50.605 "listen_address": { 00:16:50.605 "trtype": "TCP", 00:16:50.605 "adrfam": "IPv4", 00:16:50.605 "traddr": "10.0.0.2", 00:16:50.605 "trsvcid": "4420" 00:16:50.605 }, 00:16:50.605 "peer_address": { 00:16:50.605 "trtype": "TCP", 00:16:50.605 "adrfam": "IPv4", 00:16:50.605 "traddr": "10.0.0.1", 00:16:50.605 "trsvcid": "33040" 00:16:50.605 }, 00:16:50.605 "auth": { 00:16:50.605 "state": 
"completed", 00:16:50.605 "digest": "sha384", 00:16:50.605 "dhgroup": "ffdhe8192" 00:16:50.605 } 00:16:50.605 } 00:16:50.605 ]' 00:16:50.605 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.605 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.605 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.863 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.863 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.863 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.863 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.863 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.863 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:50.863 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret 
DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:51.430 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.430 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:51.430 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.430 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.430 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.430 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.430 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:51.430 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:51.689 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:51.689 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.689 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.689 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:51.689 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:51.689 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.689 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.689 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.689 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.689 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.689 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.689 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.689 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.257 00:16:52.257 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.257 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.257 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.516 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.516 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.516 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.516 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.516 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.516 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.516 { 00:16:52.516 "cntlid": 91, 00:16:52.516 "qid": 0, 00:16:52.516 "state": "enabled", 00:16:52.516 "thread": "nvmf_tgt_poll_group_000", 00:16:52.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:52.516 "listen_address": { 00:16:52.516 "trtype": "TCP", 00:16:52.516 "adrfam": "IPv4", 00:16:52.516 "traddr": "10.0.0.2", 00:16:52.516 "trsvcid": "4420" 00:16:52.516 }, 00:16:52.516 "peer_address": { 00:16:52.516 "trtype": "TCP", 00:16:52.516 "adrfam": "IPv4", 00:16:52.516 "traddr": "10.0.0.1", 00:16:52.516 "trsvcid": "44456" 00:16:52.516 }, 00:16:52.516 "auth": { 00:16:52.516 "state": "completed", 00:16:52.516 "digest": "sha384", 00:16:52.516 "dhgroup": "ffdhe8192" 00:16:52.516 } 00:16:52.516 } 00:16:52.516 ]' 00:16:52.516 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.516 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.516 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.516 17:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.516 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.516 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.516 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.516 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.775 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:52.775 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:53.343 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.343 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:53.343 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:53.343 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.343 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.343 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.343 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:53.343 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:53.602 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:53.602 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.602 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.602 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:53.602 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.602 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.602 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.602 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.602 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:53.602 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.602 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.602 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.602 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.170 00:16:54.170 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.170 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.170 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.170 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.170 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.170 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.170 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.170 17:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.170 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.170 { 00:16:54.170 "cntlid": 93, 00:16:54.170 "qid": 0, 00:16:54.170 "state": "enabled", 00:16:54.170 "thread": "nvmf_tgt_poll_group_000", 00:16:54.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:54.170 "listen_address": { 00:16:54.170 "trtype": "TCP", 00:16:54.170 "adrfam": "IPv4", 00:16:54.170 "traddr": "10.0.0.2", 00:16:54.170 "trsvcid": "4420" 00:16:54.170 }, 00:16:54.170 "peer_address": { 00:16:54.170 "trtype": "TCP", 00:16:54.170 "adrfam": "IPv4", 00:16:54.170 "traddr": "10.0.0.1", 00:16:54.170 "trsvcid": "44480" 00:16:54.170 }, 00:16:54.170 "auth": { 00:16:54.170 "state": "completed", 00:16:54.170 "digest": "sha384", 00:16:54.170 "dhgroup": "ffdhe8192" 00:16:54.170 } 00:16:54.170 } 00:16:54.170 ]' 00:16:54.170 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.429 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.429 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.429 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.429 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.429 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.429 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.429 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.687 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:54.687 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.254 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.821 00:16:55.821 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.821 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.821 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.079 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.079 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.079 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.079 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.079 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.079 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.079 { 00:16:56.079 "cntlid": 95, 00:16:56.080 "qid": 0, 00:16:56.080 "state": "enabled", 00:16:56.080 "thread": "nvmf_tgt_poll_group_000", 00:16:56.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:56.080 "listen_address": { 00:16:56.080 "trtype": "TCP", 00:16:56.080 "adrfam": "IPv4", 00:16:56.080 "traddr": "10.0.0.2", 00:16:56.080 "trsvcid": "4420" 00:16:56.080 }, 00:16:56.080 "peer_address": { 00:16:56.080 "trtype": "TCP", 00:16:56.080 "adrfam": "IPv4", 00:16:56.080 "traddr": "10.0.0.1", 
00:16:56.080 "trsvcid": "44494" 00:16:56.080 }, 00:16:56.080 "auth": { 00:16:56.080 "state": "completed", 00:16:56.080 "digest": "sha384", 00:16:56.080 "dhgroup": "ffdhe8192" 00:16:56.080 } 00:16:56.080 } 00:16:56.080 ]' 00:16:56.080 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.080 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.080 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.080 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.080 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.080 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.080 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.080 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.338 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:56.338 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:16:56.905 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.905 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:56.906 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.906 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.906 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.906 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:56.906 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.906 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.906 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:56.906 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.164 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:57.165 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.165 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.165 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:57.165 17:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:57.165 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.165 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.165 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.165 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.165 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.165 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.165 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.165 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.424 00:16:57.424 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.424 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.424 17:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.682 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.682 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.682 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.682 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.683 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.683 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.683 { 00:16:57.683 "cntlid": 97, 00:16:57.683 "qid": 0, 00:16:57.683 "state": "enabled", 00:16:57.683 "thread": "nvmf_tgt_poll_group_000", 00:16:57.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:57.683 "listen_address": { 00:16:57.683 "trtype": "TCP", 00:16:57.683 "adrfam": "IPv4", 00:16:57.683 "traddr": "10.0.0.2", 00:16:57.683 "trsvcid": "4420" 00:16:57.683 }, 00:16:57.683 "peer_address": { 00:16:57.683 "trtype": "TCP", 00:16:57.683 "adrfam": "IPv4", 00:16:57.683 "traddr": "10.0.0.1", 00:16:57.683 "trsvcid": "44500" 00:16:57.683 }, 00:16:57.683 "auth": { 00:16:57.683 "state": "completed", 00:16:57.683 "digest": "sha512", 00:16:57.683 "dhgroup": "null" 00:16:57.683 } 00:16:57.683 } 00:16:57.683 ]' 00:16:57.683 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.683 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.683 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:16:57.683 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:57.683 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.683 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.683 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.683 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.941 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:57.941 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:16:58.509 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.509 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:58.509 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.509 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.509 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.509 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.509 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:58.509 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:58.772 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:58.772 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.772 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.772 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:58.772 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.772 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.772 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.772 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.772 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.772 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.772 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.772 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.772 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.032 00:16:59.032 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.032 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.032 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.032 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.032 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.032 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:59.032 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.032 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.032 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.032 { 00:16:59.032 "cntlid": 99, 00:16:59.032 "qid": 0, 00:16:59.032 "state": "enabled", 00:16:59.032 "thread": "nvmf_tgt_poll_group_000", 00:16:59.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:59.033 "listen_address": { 00:16:59.033 "trtype": "TCP", 00:16:59.033 "adrfam": "IPv4", 00:16:59.033 "traddr": "10.0.0.2", 00:16:59.033 "trsvcid": "4420" 00:16:59.033 }, 00:16:59.033 "peer_address": { 00:16:59.033 "trtype": "TCP", 00:16:59.033 "adrfam": "IPv4", 00:16:59.033 "traddr": "10.0.0.1", 00:16:59.033 "trsvcid": "44530" 00:16:59.033 }, 00:16:59.033 "auth": { 00:16:59.033 "state": "completed", 00:16:59.033 "digest": "sha512", 00:16:59.033 "dhgroup": "null" 00:16:59.033 } 00:16:59.033 } 00:16:59.033 ]' 00:16:59.033 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.291 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.291 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.291 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:59.291 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.291 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.291 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.291 17:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.549 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:16:59.550 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:17:00.116 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.117 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.375 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.375 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.375 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.375 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.375 00:17:00.375 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.375 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.375 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.634 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.634 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.634 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.634 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.634 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.634 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.634 { 00:17:00.634 "cntlid": 101, 00:17:00.634 "qid": 0, 00:17:00.634 "state": "enabled", 00:17:00.634 "thread": "nvmf_tgt_poll_group_000", 00:17:00.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:00.634 "listen_address": { 00:17:00.634 "trtype": "TCP", 00:17:00.634 "adrfam": "IPv4", 00:17:00.634 
"traddr": "10.0.0.2", 00:17:00.634 "trsvcid": "4420" 00:17:00.634 }, 00:17:00.634 "peer_address": { 00:17:00.634 "trtype": "TCP", 00:17:00.634 "adrfam": "IPv4", 00:17:00.634 "traddr": "10.0.0.1", 00:17:00.634 "trsvcid": "35686" 00:17:00.634 }, 00:17:00.634 "auth": { 00:17:00.634 "state": "completed", 00:17:00.634 "digest": "sha512", 00:17:00.634 "dhgroup": "null" 00:17:00.634 } 00:17:00.634 } 00:17:00.634 ]' 00:17:00.634 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.634 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.892 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.892 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:00.892 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.892 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.892 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.892 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.150 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:17:01.150 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.717 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.976 00:17:01.976 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.976 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.976 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.234 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.234 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.234 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.234 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.234 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.234 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.234 { 00:17:02.234 "cntlid": 103, 00:17:02.234 "qid": 0, 00:17:02.234 "state": "enabled", 00:17:02.234 "thread": "nvmf_tgt_poll_group_000", 00:17:02.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:02.234 "listen_address": { 00:17:02.234 "trtype": "TCP", 00:17:02.234 "adrfam": "IPv4", 00:17:02.234 "traddr": "10.0.0.2", 00:17:02.234 "trsvcid": "4420" 00:17:02.234 }, 00:17:02.234 "peer_address": { 00:17:02.234 "trtype": "TCP", 00:17:02.234 "adrfam": "IPv4", 00:17:02.234 "traddr": "10.0.0.1", 00:17:02.234 "trsvcid": "35720" 00:17:02.234 }, 00:17:02.234 "auth": { 00:17:02.234 "state": "completed", 00:17:02.234 "digest": "sha512", 00:17:02.234 "dhgroup": "null" 00:17:02.234 } 00:17:02.234 } 00:17:02.234 ]' 00:17:02.234 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.234 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.234 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.493 17:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:02.493 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.493 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.493 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.493 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.751 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:17:02.751 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.317 17:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.317 
17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.317 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.576 00:17:03.576 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.576 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.576 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.835 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.835 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.835 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.835 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.835 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.835 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.835 { 00:17:03.835 "cntlid": 105, 00:17:03.835 "qid": 0, 00:17:03.835 "state": "enabled", 00:17:03.835 "thread": "nvmf_tgt_poll_group_000", 00:17:03.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:03.835 "listen_address": { 00:17:03.835 "trtype": "TCP", 00:17:03.835 "adrfam": "IPv4", 00:17:03.835 "traddr": "10.0.0.2", 00:17:03.835 "trsvcid": "4420" 00:17:03.835 }, 00:17:03.835 "peer_address": { 00:17:03.835 "trtype": "TCP", 00:17:03.835 "adrfam": "IPv4", 00:17:03.835 "traddr": "10.0.0.1", 00:17:03.835 "trsvcid": "35760" 00:17:03.835 }, 00:17:03.835 "auth": { 00:17:03.835 "state": "completed", 00:17:03.835 "digest": "sha512", 00:17:03.835 "dhgroup": "ffdhe2048" 00:17:03.835 } 00:17:03.835 } 00:17:03.835 ]' 00:17:03.835 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.835 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.835 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.093 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:04.093 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.093 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.093 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.093 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:17:04.351 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=:
00:17:04.351 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=:
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:04.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:04.917 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:05.175
00:17:05.175 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:05.175 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:05.175 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:05.433 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:05.433 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:05.433 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.433 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.433 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.433 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:05.433 {
00:17:05.433 "cntlid": 107,
00:17:05.433 "qid": 0,
00:17:05.433 "state": "enabled",
00:17:05.433 "thread": "nvmf_tgt_poll_group_000",
00:17:05.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:05.433 "listen_address": {
00:17:05.433 "trtype": "TCP",
00:17:05.433 "adrfam": "IPv4",
00:17:05.433 "traddr": "10.0.0.2",
00:17:05.433 "trsvcid": "4420"
00:17:05.433 },
00:17:05.433 "peer_address": {
00:17:05.433 "trtype": "TCP",
00:17:05.433 "adrfam": "IPv4",
00:17:05.433 "traddr": "10.0.0.1",
00:17:05.433 "trsvcid": "35788"
00:17:05.433 },
00:17:05.433 "auth": {
00:17:05.433 "state": "completed",
00:17:05.433 "digest": "sha512",
00:17:05.433 "dhgroup": "ffdhe2048"
00:17:05.433 }
00:17:05.433 }
00:17:05.433 ]'
00:17:05.433 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:05.433 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:05.433 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:05.692 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:05.692 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:05.692 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:05.692 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:05.692 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:05.950 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==:
00:17:05.950 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==:
00:17:06.516 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:06.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:06.516 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:06.516 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.516 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.516 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.516 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:06.516 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:06.516 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:06.516 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:17:06.516 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:06.516 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:06.516 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:06.516 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:06.516 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:06.516 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:06.516 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.516 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.517 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.517 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:06.517 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:06.517 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:06.775
00:17:06.775 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:06.775 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:06.775 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:07.033 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:07.033 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:07.033 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.033 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.033 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.033 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:07.033 {
00:17:07.033 "cntlid": 109,
00:17:07.033 "qid": 0,
00:17:07.033 "state": "enabled",
00:17:07.033 "thread": "nvmf_tgt_poll_group_000",
00:17:07.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:07.034 "listen_address": {
00:17:07.034 "trtype": "TCP",
00:17:07.034 "adrfam": "IPv4",
00:17:07.034 "traddr": "10.0.0.2",
00:17:07.034 "trsvcid": "4420"
00:17:07.034 },
00:17:07.034 "peer_address": {
00:17:07.034 "trtype": "TCP",
00:17:07.034 "adrfam": "IPv4",
00:17:07.034 "traddr": "10.0.0.1",
00:17:07.034 "trsvcid": "35826"
00:17:07.034 },
00:17:07.034 "auth": {
00:17:07.034 "state": "completed",
00:17:07.034 "digest": "sha512",
00:17:07.034 "dhgroup": "ffdhe2048"
00:17:07.034 }
00:17:07.034 }
00:17:07.034 ]'
00:17:07.034 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:07.034 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:07.034 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:07.291 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:07.291 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:07.291 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:07.291 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:07.291 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:07.291 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY:
00:17:07.291 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY:
00:17:07.858 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:07.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:07.858 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:07.858 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.858 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.858 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.858 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:07.858 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:07.858 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:08.116 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:17:08.116 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:08.116 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:08.116 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:08.116 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:08.117 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:08.117 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:17:08.117 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.117 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.117 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.117 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:08.117 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:08.117 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:08.375
00:17:08.375 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:08.375 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:08.375 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:08.633 {
00:17:08.633 "cntlid": 111,
00:17:08.633 "qid": 0,
00:17:08.633 "state": "enabled",
00:17:08.633 "thread": "nvmf_tgt_poll_group_000",
00:17:08.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:08.633 "listen_address": {
00:17:08.633 "trtype": "TCP",
00:17:08.633 "adrfam": "IPv4",
00:17:08.633 "traddr": "10.0.0.2",
00:17:08.633 "trsvcid": "4420"
00:17:08.633 },
00:17:08.633 "peer_address": {
00:17:08.633 "trtype": "TCP",
00:17:08.633 "adrfam": "IPv4",
00:17:08.633 "traddr": "10.0.0.1",
00:17:08.633 "trsvcid": "35844"
00:17:08.633 },
00:17:08.633 "auth": {
00:17:08.633 "state": "completed",
00:17:08.633 "digest": "sha512",
00:17:08.633 "dhgroup": "ffdhe2048"
00:17:08.633 }
00:17:08.633 }
00:17:08.633 ]'
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:08.633 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:08.891 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=:
00:17:08.891 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=:
00:17:09.457 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:09.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:09.457 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:09.457 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.457 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.457 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.457 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:09.457 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:09.457 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:09.457 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:09.717 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:17:09.717 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:09.717 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:09.717 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:09.717 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:09.717 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:09.717 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:09.717 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.717 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.717 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.717 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:09.717 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:09.717 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:09.975
00:17:09.975 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:09.975 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:09.975 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:10.234 {
00:17:10.234 "cntlid": 113,
00:17:10.234 "qid": 0,
00:17:10.234 "state": "enabled",
00:17:10.234 "thread": "nvmf_tgt_poll_group_000",
00:17:10.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:10.234 "listen_address": {
00:17:10.234 "trtype": "TCP",
00:17:10.234 "adrfam": "IPv4",
00:17:10.234 "traddr": "10.0.0.2",
00:17:10.234 "trsvcid": "4420"
00:17:10.234 },
00:17:10.234 "peer_address": {
00:17:10.234 "trtype": "TCP",
00:17:10.234 "adrfam": "IPv4",
00:17:10.234 "traddr": "10.0.0.1",
00:17:10.234 "trsvcid": "35872"
00:17:10.234 },
00:17:10.234 "auth": {
00:17:10.234 "state": "completed",
00:17:10.234 "digest": "sha512",
00:17:10.234 "dhgroup": "ffdhe3072"
00:17:10.234 }
00:17:10.234 }
00:17:10.234 ]'
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:10.234 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:10.493 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=:
00:17:10.493 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=:
00:17:11.059 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:11.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:11.059 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:11.059 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.059 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.059 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.059 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:11.059 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:11.059 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:11.318 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:17:11.318 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:11.318 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:11.318 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:11.318 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:11.318 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:11.318 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:11.318 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.318 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.318 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.318 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:11.318 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:11.318 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:11.576
00:17:11.576 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:11.576 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:11.576 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:11.834 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:11.834 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:11.834 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.834 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.834 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.834 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:11.834 {
00:17:11.834 "cntlid": 115,
00:17:11.834 "qid": 0,
00:17:11.834 "state": "enabled",
00:17:11.834 "thread": "nvmf_tgt_poll_group_000",
00:17:11.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:11.834 "listen_address": {
00:17:11.834 "trtype": "TCP",
00:17:11.834 "adrfam": "IPv4",
00:17:11.834 "traddr": "10.0.0.2",
00:17:11.834 "trsvcid": "4420"
00:17:11.834 },
00:17:11.834 "peer_address": {
00:17:11.834 "trtype": "TCP",
00:17:11.834 "adrfam": "IPv4",
00:17:11.834 "traddr": "10.0.0.1",
00:17:11.834 "trsvcid": "57460"
00:17:11.834 },
00:17:11.834 "auth": {
00:17:11.834 "state": "completed",
00:17:11.834 "digest": "sha512",
00:17:11.834 "dhgroup": "ffdhe3072"
00:17:11.834 }
00:17:11.834 }
00:17:11.834 ]'
00:17:11.834 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:11.834 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:11.834 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:11.835 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:11.835 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:11.835 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:11.835 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:11.835 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:12.093 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==:
00:17:12.093 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==:
00:17:12.660 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:12.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:12.660 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:12.660 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.660 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:12.660 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.660 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:12.660 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:12.660 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:12.919 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:17:12.919 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:12.919 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:12.919 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:12.919 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:12.919 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:12.919 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:12.919 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.919 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:12.919 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.919 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:12.919 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:12.919 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:13.177
00:17:13.177 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:13.177 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:13.177 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@10 -- # set +x 00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.436 { 00:17:13.436 "cntlid": 117, 00:17:13.436 "qid": 0, 00:17:13.436 "state": "enabled", 00:17:13.436 "thread": "nvmf_tgt_poll_group_000", 00:17:13.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:13.436 "listen_address": { 00:17:13.436 "trtype": "TCP", 00:17:13.436 "adrfam": "IPv4", 00:17:13.436 "traddr": "10.0.0.2", 00:17:13.436 "trsvcid": "4420" 00:17:13.436 }, 00:17:13.436 "peer_address": { 00:17:13.436 "trtype": "TCP", 00:17:13.436 "adrfam": "IPv4", 00:17:13.436 "traddr": "10.0.0.1", 00:17:13.436 "trsvcid": "57498" 00:17:13.436 }, 00:17:13.436 "auth": { 00:17:13.436 "state": "completed", 00:17:13.436 "digest": "sha512", 00:17:13.436 "dhgroup": "ffdhe3072" 00:17:13.436 } 00:17:13.436 } 00:17:13.436 ]' 00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.436 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.694 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:17:13.694 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:17:14.261 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.261 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:14.261 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.261 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.261 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.261 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.261 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.261 17:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.520 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:14.520 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.520 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.520 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.520 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.520 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.520 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:14.520 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.520 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.520 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.520 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.520 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.520 17:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.779 00:17:14.779 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.779 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.779 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.037 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.037 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.037 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.037 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.037 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.037 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.037 { 00:17:15.037 "cntlid": 119, 00:17:15.037 "qid": 0, 00:17:15.037 "state": "enabled", 00:17:15.037 "thread": "nvmf_tgt_poll_group_000", 00:17:15.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:15.037 "listen_address": { 00:17:15.037 "trtype": "TCP", 00:17:15.037 "adrfam": "IPv4", 00:17:15.038 "traddr": "10.0.0.2", 00:17:15.038 "trsvcid": "4420" 00:17:15.038 }, 00:17:15.038 "peer_address": { 00:17:15.038 "trtype": 
"TCP", 00:17:15.038 "adrfam": "IPv4", 00:17:15.038 "traddr": "10.0.0.1", 00:17:15.038 "trsvcid": "57530" 00:17:15.038 }, 00:17:15.038 "auth": { 00:17:15.038 "state": "completed", 00:17:15.038 "digest": "sha512", 00:17:15.038 "dhgroup": "ffdhe3072" 00:17:15.038 } 00:17:15.038 } 00:17:15.038 ]' 00:17:15.038 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.038 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.038 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.038 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.038 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.038 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.038 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.038 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.296 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:17:15.296 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 
00:17:15.864 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.864 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:15.864 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.864 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.864 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.864 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.864 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.864 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:15.864 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:16.123 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:16.123 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.123 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.123 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:16.123 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key0 00:17:16.123 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.123 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.123 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.123 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.123 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.123 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.123 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.123 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.381 00:17:16.381 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.381 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.381 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.640 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.640 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.640 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.640 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.640 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.640 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.640 { 00:17:16.640 "cntlid": 121, 00:17:16.640 "qid": 0, 00:17:16.640 "state": "enabled", 00:17:16.640 "thread": "nvmf_tgt_poll_group_000", 00:17:16.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:16.640 "listen_address": { 00:17:16.640 "trtype": "TCP", 00:17:16.640 "adrfam": "IPv4", 00:17:16.640 "traddr": "10.0.0.2", 00:17:16.640 "trsvcid": "4420" 00:17:16.640 }, 00:17:16.640 "peer_address": { 00:17:16.640 "trtype": "TCP", 00:17:16.640 "adrfam": "IPv4", 00:17:16.640 "traddr": "10.0.0.1", 00:17:16.640 "trsvcid": "57556" 00:17:16.640 }, 00:17:16.640 "auth": { 00:17:16.640 "state": "completed", 00:17:16.640 "digest": "sha512", 00:17:16.640 "dhgroup": "ffdhe4096" 00:17:16.640 } 00:17:16.640 } 00:17:16.640 ]' 00:17:16.640 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.640 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.640 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.640 17:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:16.640 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.640 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.640 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.640 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.899 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:17:16.899 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:17:17.467 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.467 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:17.467 17:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.467 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.467 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.467 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.467 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.467 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.726 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:17.726 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.726 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.726 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:17.726 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:17.726 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.726 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.726 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.726 17:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.726 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.726 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.726 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.726 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.984 00:17:17.984 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.984 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.984 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.243 { 00:17:18.243 "cntlid": 123, 00:17:18.243 "qid": 0, 00:17:18.243 "state": "enabled", 00:17:18.243 "thread": "nvmf_tgt_poll_group_000", 00:17:18.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:18.243 "listen_address": { 00:17:18.243 "trtype": "TCP", 00:17:18.243 "adrfam": "IPv4", 00:17:18.243 "traddr": "10.0.0.2", 00:17:18.243 "trsvcid": "4420" 00:17:18.243 }, 00:17:18.243 "peer_address": { 00:17:18.243 "trtype": "TCP", 00:17:18.243 "adrfam": "IPv4", 00:17:18.243 "traddr": "10.0.0.1", 00:17:18.243 "trsvcid": "57592" 00:17:18.243 }, 00:17:18.243 "auth": { 00:17:18.243 "state": "completed", 00:17:18.243 "digest": "sha512", 00:17:18.243 "dhgroup": "ffdhe4096" 00:17:18.243 } 00:17:18.243 } 00:17:18.243 ]' 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.243 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.502 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:17:18.502 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:17:19.070 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.070 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:19.070 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.070 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.070 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.070 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.070 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.070 17:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.329 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:19.329 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.329 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.329 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:19.329 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.329 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.329 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.329 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.329 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.329 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.329 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.329 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.329 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.587 00:17:19.587 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.587 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.587 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.846 { 00:17:19.846 "cntlid": 125, 00:17:19.846 "qid": 0, 00:17:19.846 "state": "enabled", 00:17:19.846 "thread": "nvmf_tgt_poll_group_000", 00:17:19.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:19.846 "listen_address": { 00:17:19.846 "trtype": "TCP", 00:17:19.846 "adrfam": "IPv4", 00:17:19.846 "traddr": "10.0.0.2", 00:17:19.846 
"trsvcid": "4420" 00:17:19.846 }, 00:17:19.846 "peer_address": { 00:17:19.846 "trtype": "TCP", 00:17:19.846 "adrfam": "IPv4", 00:17:19.846 "traddr": "10.0.0.1", 00:17:19.846 "trsvcid": "57620" 00:17:19.846 }, 00:17:19.846 "auth": { 00:17:19.846 "state": "completed", 00:17:19.846 "digest": "sha512", 00:17:19.846 "dhgroup": "ffdhe4096" 00:17:19.846 } 00:17:19.846 } 00:17:19.846 ]' 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.846 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.105 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:17:20.105 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY: 00:17:20.671 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.672 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:20.672 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.672 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.672 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.672 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.672 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:20.672 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:20.930 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:20.930 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.930 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:20.930 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:20.930 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.930 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.930 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:20.930 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.930 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.930 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.930 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.930 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.930 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.189 00:17:21.189 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.189 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.189 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.462 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.462 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.462 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.462 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.462 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.462 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.462 { 00:17:21.462 "cntlid": 127, 00:17:21.462 "qid": 0, 00:17:21.462 "state": "enabled", 00:17:21.462 "thread": "nvmf_tgt_poll_group_000", 00:17:21.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:21.462 "listen_address": { 00:17:21.462 "trtype": "TCP", 00:17:21.462 "adrfam": "IPv4", 00:17:21.462 "traddr": "10.0.0.2", 00:17:21.462 "trsvcid": "4420" 00:17:21.462 }, 00:17:21.462 "peer_address": { 00:17:21.462 "trtype": "TCP", 00:17:21.462 "adrfam": "IPv4", 00:17:21.462 "traddr": "10.0.0.1", 00:17:21.462 "trsvcid": "52500" 00:17:21.462 }, 00:17:21.462 "auth": { 00:17:21.462 "state": "completed", 00:17:21.462 "digest": "sha512", 00:17:21.462 "dhgroup": "ffdhe4096" 00:17:21.462 } 00:17:21.462 } 00:17:21.462 ]' 00:17:21.462 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.462 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.462 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.462 17:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.462 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.462 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.462 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.462 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.807 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:17:21.807 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.447 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.741 00:17:22.741 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.741 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.741 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.999 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.999 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.999 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.000 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.000 17:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.000 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.000 { 00:17:23.000 "cntlid": 129, 00:17:23.000 "qid": 0, 00:17:23.000 "state": "enabled", 00:17:23.000 "thread": "nvmf_tgt_poll_group_000", 00:17:23.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:23.000 "listen_address": { 00:17:23.000 "trtype": "TCP", 00:17:23.000 "adrfam": "IPv4", 00:17:23.000 "traddr": "10.0.0.2", 00:17:23.000 "trsvcid": "4420" 00:17:23.000 }, 00:17:23.000 "peer_address": { 00:17:23.000 "trtype": "TCP", 00:17:23.000 "adrfam": "IPv4", 00:17:23.000 "traddr": "10.0.0.1", 00:17:23.000 "trsvcid": "52538" 00:17:23.000 }, 00:17:23.000 "auth": { 00:17:23.000 "state": "completed", 00:17:23.000 "digest": "sha512", 00:17:23.000 "dhgroup": "ffdhe6144" 00:17:23.000 } 00:17:23.000 } 00:17:23.000 ]' 00:17:23.000 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.000 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.000 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.000 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:23.000 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.258 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.258 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.258 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.258 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:17:23.258 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:17:23.826 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.826 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:23.826 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.826 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.826 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.826 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.826 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:23.826 17:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:24.085 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:24.085 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.085 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.085 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:24.085 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.085 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.085 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.085 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.085 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.085 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.085 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.085 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.085 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.652 00:17:24.652 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.652 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.652 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.652 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.652 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.652 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.652 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.652 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.652 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.652 { 00:17:24.652 "cntlid": 131, 00:17:24.652 "qid": 0, 00:17:24.652 "state": "enabled", 00:17:24.652 "thread": "nvmf_tgt_poll_group_000", 00:17:24.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:24.652 "listen_address": { 00:17:24.652 "trtype": "TCP", 00:17:24.652 "adrfam": "IPv4", 00:17:24.652 "traddr": "10.0.0.2", 00:17:24.652 
"trsvcid": "4420" 00:17:24.652 }, 00:17:24.652 "peer_address": { 00:17:24.652 "trtype": "TCP", 00:17:24.652 "adrfam": "IPv4", 00:17:24.652 "traddr": "10.0.0.1", 00:17:24.652 "trsvcid": "52554" 00:17:24.652 }, 00:17:24.652 "auth": { 00:17:24.652 "state": "completed", 00:17:24.652 "digest": "sha512", 00:17:24.652 "dhgroup": "ffdhe6144" 00:17:24.652 } 00:17:24.652 } 00:17:24.652 ]' 00:17:24.652 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.652 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.652 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.652 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:24.911 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.911 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.911 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.911 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.169 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:17:25.169 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==: 00:17:25.737 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.737 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.305 00:17:26.305 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.305 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:26.305 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.305 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.305 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.305 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.305 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.305 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.305 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.305 { 00:17:26.305 "cntlid": 133, 00:17:26.305 "qid": 0, 00:17:26.305 "state": "enabled", 00:17:26.305 "thread": "nvmf_tgt_poll_group_000", 00:17:26.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:26.305 "listen_address": { 00:17:26.305 "trtype": "TCP", 00:17:26.305 "adrfam": "IPv4", 00:17:26.305 "traddr": "10.0.0.2", 00:17:26.305 "trsvcid": "4420" 00:17:26.305 }, 00:17:26.305 "peer_address": { 00:17:26.305 "trtype": "TCP", 00:17:26.305 "adrfam": "IPv4", 00:17:26.305 "traddr": "10.0.0.1", 00:17:26.305 "trsvcid": "52582" 00:17:26.305 }, 00:17:26.305 "auth": { 00:17:26.305 "state": "completed", 00:17:26.305 "digest": "sha512", 00:17:26.305 "dhgroup": "ffdhe6144" 00:17:26.305 } 00:17:26.305 } 00:17:26.305 ]' 00:17:26.305 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.305 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.305 17:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:26.564 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:26.564 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:26.564 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:26.564 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:26.564 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:26.822 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY:
00:17:26.823 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY:
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:27.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:27.390 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:27.957
00:17:27.957 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:27.957 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:27.957 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:27.957 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:27.957 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:27.957 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.957 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.957 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.957 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:27.957 {
00:17:27.957 "cntlid": 135,
00:17:27.957 "qid": 0,
00:17:27.957 "state": "enabled",
00:17:27.957 "thread": "nvmf_tgt_poll_group_000",
00:17:27.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:27.957 "listen_address": {
00:17:27.957 "trtype": "TCP",
00:17:27.957 "adrfam": "IPv4",
00:17:27.957 "traddr": "10.0.0.2",
00:17:27.957 "trsvcid": "4420"
00:17:27.957 },
00:17:27.957 "peer_address": {
00:17:27.957 "trtype": "TCP",
00:17:27.957 "adrfam": "IPv4",
00:17:27.957 "traddr": "10.0.0.1",
00:17:27.957 "trsvcid": "52596"
00:17:27.957 },
00:17:27.957 "auth": {
00:17:27.957 "state": "completed",
00:17:27.957 "digest": "sha512",
00:17:27.957 "dhgroup": "ffdhe6144"
00:17:27.957 }
00:17:27.957 }
00:17:27.957 ]'
00:17:27.957 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:27.957 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:27.957 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:28.216 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:28.216 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:28.216 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:28.216 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:28.216 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:28.474 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=:
00:17:28.474 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=:
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:29.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:29.041 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:29.609
00:17:29.609 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:29.609 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:29.609 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:29.868 {
00:17:29.868 "cntlid": 137,
00:17:29.868 "qid": 0,
00:17:29.868 "state": "enabled",
00:17:29.868 "thread": "nvmf_tgt_poll_group_000",
00:17:29.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:29.868 "listen_address": {
00:17:29.868 "trtype": "TCP",
00:17:29.868 "adrfam": "IPv4",
00:17:29.868 "traddr": "10.0.0.2",
00:17:29.868 "trsvcid": "4420"
00:17:29.868 },
00:17:29.868 "peer_address": {
00:17:29.868 "trtype": "TCP",
00:17:29.868 "adrfam": "IPv4",
00:17:29.868 "traddr": "10.0.0.1",
00:17:29.868 "trsvcid": "52612"
00:17:29.868 },
00:17:29.868 "auth": {
00:17:29.868 "state": "completed",
00:17:29.868 "digest": "sha512",
00:17:29.868 "dhgroup": "ffdhe8192"
00:17:29.868 }
00:17:29.868 }
00:17:29.868 ]'
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:29.868 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:30.127 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=:
00:17:30.127 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=:
00:17:30.695 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:30.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:30.695 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:30.695 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.695 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.695 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.695 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:30.695 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:30.695 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:30.954 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:17:30.954 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:30.954 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:30.954 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:30.954 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:30.954 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:30.954 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:30.954 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.954 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.954 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.954 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:30.954 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:30.954 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:31.521
00:17:31.521 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:31.521 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:31.521 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:31.521 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:31.521 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:31.521 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.521 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:31.780 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.780 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:31.780 {
00:17:31.780 "cntlid": 139,
00:17:31.780 "qid": 0,
00:17:31.780 "state": "enabled",
00:17:31.780 "thread": "nvmf_tgt_poll_group_000",
00:17:31.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:31.780 "listen_address": {
00:17:31.780 "trtype": "TCP",
00:17:31.780 "adrfam": "IPv4",
00:17:31.780 "traddr": "10.0.0.2",
00:17:31.780 "trsvcid": "4420"
00:17:31.780 },
00:17:31.780 "peer_address": {
00:17:31.780 "trtype": "TCP",
00:17:31.780 "adrfam": "IPv4",
00:17:31.780 "traddr": "10.0.0.1",
00:17:31.780 "trsvcid": "46482"
00:17:31.780 },
00:17:31.780 "auth": {
00:17:31.780 "state": "completed",
00:17:31.780 "digest": "sha512",
00:17:31.780 "dhgroup": "ffdhe8192"
00:17:31.780 }
00:17:31.780 }
00:17:31.780 ]'
00:17:31.780 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:31.780 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:31.780 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:31.780 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:31.780 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:31.780 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:31.780 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:31.780 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:32.039 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==:
00:17:32.039 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: --dhchap-ctrl-secret DHHC-1:02:NjlmM2MxMDViNmVjMTA2YjdlODYwZDZmMTQ1NDczYWVlZGI0OWRlMjQ0YjBkNzQyO9Tz5w==:
00:17:32.607 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:32.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:32.607 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:32.607 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.607 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.607 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.607 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:32.607 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:32.607 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:32.866 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:17:32.866 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:32.866 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:32.866 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:32.866 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:32.866 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:32.866 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:32.866 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.866 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.866 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.866 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:32.866 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:32.866 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:33.125
00:17:33.384 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:33.384 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:33.384 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:33.384 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:33.384 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:33.384 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.384 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:33.384 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.384 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:33.384 {
00:17:33.384 "cntlid": 141,
00:17:33.384 "qid": 0,
00:17:33.384 "state": "enabled",
00:17:33.384 "thread": "nvmf_tgt_poll_group_000",
00:17:33.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:33.384 "listen_address": {
00:17:33.384 "trtype": "TCP",
00:17:33.384 "adrfam": "IPv4",
00:17:33.384 "traddr": "10.0.0.2",
00:17:33.384 "trsvcid": "4420"
00:17:33.384 },
00:17:33.384 "peer_address": {
00:17:33.384 "trtype": "TCP",
00:17:33.384 "adrfam": "IPv4",
00:17:33.384 "traddr": "10.0.0.1",
00:17:33.384 "trsvcid": "46522"
00:17:33.384 },
00:17:33.384 "auth": {
00:17:33.384 "state": "completed",
00:17:33.384 "digest": "sha512",
00:17:33.384 "dhgroup": "ffdhe8192"
00:17:33.384 }
00:17:33.384 }
00:17:33.384 ]'
00:17:33.384 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:33.384 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:33.384 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:33.642 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:33.642 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:33.642 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:33.642 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:33.642 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:33.901 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY:
00:17:33.901 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:01:NjQ4MDI2MjE1MDFhYTJlNTlkYmE1MjA2ZjYyNzA2YWMD6+hY:
00:17:34.468 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:34.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:34.468 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:34.468 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.468 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:34.468 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.468 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:34.468 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:34.468 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:34.468 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:17:34.468 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:34.468 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:34.468 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:34.469 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:34.469 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:34.469 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:17:34.469 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.469 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:34.469 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.469 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:34.469 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:34.469 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:35.037
00:17:35.037 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:35.037 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:35.037 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:35.296 {
00:17:35.296 "cntlid": 143,
00:17:35.296 "qid": 0,
00:17:35.296 "state": "enabled",
00:17:35.296 "thread": "nvmf_tgt_poll_group_000",
00:17:35.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:35.296 "listen_address": {
00:17:35.296 "trtype": "TCP",
00:17:35.296 "adrfam": "IPv4",
00:17:35.296 "traddr": "10.0.0.2",
00:17:35.296 "trsvcid": "4420"
00:17:35.296 },
00:17:35.296 "peer_address": {
00:17:35.296 "trtype": "TCP",
00:17:35.296 "adrfam": "IPv4",
00:17:35.296 "traddr": "10.0.0.1",
00:17:35.296 "trsvcid": "46546"
00:17:35.296 },
00:17:35.296 "auth": {
00:17:35.296 "state": "completed",
00:17:35.296 "digest": "sha512",
00:17:35.296 "dhgroup": "ffdhe8192"
00:17:35.296 }
00:17:35.296 }
00:17:35.296 ]'
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:35.296 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:35.554 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=:
00:17:35.554 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=:
00:17:36.121 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:36.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:36.121 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:36.121 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.121 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.121 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.121 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:17:36.121 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:17:36.121 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:17:36.121 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:36.121 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:36.121 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:36.380 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:17:36.380 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:36.380 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:36.380 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:36.380 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:36.380 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:36.380 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:36.380 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.380 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.380 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.380 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:36.380 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:36.380 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.947 00:17:36.947 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.947 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.947 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.947 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.947 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.947 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.947 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.947 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.947 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.947 { 00:17:36.947 "cntlid": 145, 00:17:36.947 "qid": 0, 00:17:36.947 "state": "enabled", 00:17:36.947 "thread": "nvmf_tgt_poll_group_000", 00:17:36.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:36.947 "listen_address": { 00:17:36.947 "trtype": "TCP", 00:17:36.947 "adrfam": "IPv4", 00:17:36.947 "traddr": "10.0.0.2", 00:17:36.947 "trsvcid": "4420" 00:17:36.947 }, 00:17:36.947 "peer_address": { 00:17:36.947 "trtype": "TCP", 00:17:36.947 "adrfam": "IPv4", 00:17:36.947 "traddr": "10.0.0.1", 00:17:36.947 "trsvcid": "46564" 00:17:36.947 }, 00:17:36.947 "auth": { 00:17:36.947 "state": 
"completed", 00:17:36.947 "digest": "sha512", 00:17:36.947 "dhgroup": "ffdhe8192" 00:17:36.947 } 00:17:36.947 } 00:17:36.947 ]' 00:17:36.947 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.206 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.206 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.206 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.206 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.206 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.206 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.206 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.465 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:17:37.465 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YThlODI1MjZjY2E0MDFjMDc3YTI4NzFlZmFjYTI4MGVmOWI4ZTMwYTM3NDYxYjAxNPq7KQ==: --dhchap-ctrl-secret 
DHHC-1:03:MDMzM2Y1MDNmMzMxNzdjMzdjYmFiMWQwNGM4YzcyMzQzOWQ3MzBmOGI1MThhM2JiYzUzZmI3NDg2NDdlOGNkMxi//sQ=: 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:38.033 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:38.291 request: 00:17:38.291 { 00:17:38.291 "name": "nvme0", 00:17:38.291 "trtype": "tcp", 00:17:38.291 "traddr": "10.0.0.2", 00:17:38.291 "adrfam": "ipv4", 00:17:38.291 "trsvcid": "4420", 00:17:38.291 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:38.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:38.291 "prchk_reftag": false, 00:17:38.291 "prchk_guard": false, 00:17:38.291 "hdgst": false, 00:17:38.291 "ddgst": false, 00:17:38.291 "dhchap_key": "key2", 00:17:38.291 "allow_unrecognized_csi": false, 00:17:38.291 "method": "bdev_nvme_attach_controller", 00:17:38.291 "req_id": 1 00:17:38.291 } 00:17:38.291 Got JSON-RPC error response 00:17:38.291 response: 00:17:38.291 { 00:17:38.291 "code": -5, 00:17:38.291 "message": 
"Input/output error" 00:17:38.291 } 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:38.549 17:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.549 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.807 request: 00:17:38.807 { 00:17:38.807 "name": "nvme0", 00:17:38.807 "trtype": "tcp", 00:17:38.807 "traddr": "10.0.0.2", 00:17:38.807 "adrfam": "ipv4", 00:17:38.807 "trsvcid": "4420", 00:17:38.807 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:38.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:38.807 "prchk_reftag": false, 00:17:38.807 "prchk_guard": false, 00:17:38.807 "hdgst": 
false, 00:17:38.807 "ddgst": false, 00:17:38.807 "dhchap_key": "key1", 00:17:38.807 "dhchap_ctrlr_key": "ckey2", 00:17:38.807 "allow_unrecognized_csi": false, 00:17:38.807 "method": "bdev_nvme_attach_controller", 00:17:38.807 "req_id": 1 00:17:38.807 } 00:17:38.807 Got JSON-RPC error response 00:17:38.807 response: 00:17:38.807 { 00:17:38.807 "code": -5, 00:17:38.807 "message": "Input/output error" 00:17:38.807 } 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.807 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.808 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.375 request: 00:17:39.375 { 00:17:39.375 "name": "nvme0", 00:17:39.375 "trtype": 
"tcp", 00:17:39.375 "traddr": "10.0.0.2", 00:17:39.375 "adrfam": "ipv4", 00:17:39.375 "trsvcid": "4420", 00:17:39.375 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:39.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:39.375 "prchk_reftag": false, 00:17:39.375 "prchk_guard": false, 00:17:39.375 "hdgst": false, 00:17:39.375 "ddgst": false, 00:17:39.375 "dhchap_key": "key1", 00:17:39.375 "dhchap_ctrlr_key": "ckey1", 00:17:39.375 "allow_unrecognized_csi": false, 00:17:39.375 "method": "bdev_nvme_attach_controller", 00:17:39.375 "req_id": 1 00:17:39.375 } 00:17:39.375 Got JSON-RPC error response 00:17:39.375 response: 00:17:39.375 { 00:17:39.375 "code": -5, 00:17:39.375 "message": "Input/output error" 00:17:39.375 } 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1882597 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 1882597 ']' 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1882597 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1882597 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1882597' 00:17:39.375 killing process with pid 1882597 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1882597 00:17:39.375 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1882597 00:17:39.635 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:39.635 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:39.635 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.635 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.635 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1904374 00:17:39.635 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:39.635 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1904374 00:17:39.635 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1904374 ']' 00:17:39.635 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.635 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.635 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.635 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.635 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1904374 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1904374 ']' 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.893 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.153 null0 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GIT 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.JKB ]] 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JKB 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.hfT 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.1gI ]] 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1gI 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Zwa 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.J7w ]] 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J7w 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.W6A 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.153 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.089 nvme0n1 00:17:41.089 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.089 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.089 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.089 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.089 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.089 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.089 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.089 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.089 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.089 { 00:17:41.089 "cntlid": 1, 00:17:41.089 "qid": 0, 00:17:41.089 "state": "enabled", 00:17:41.089 "thread": "nvmf_tgt_poll_group_000", 00:17:41.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:41.089 "listen_address": { 00:17:41.089 "trtype": "TCP", 00:17:41.089 "adrfam": "IPv4", 00:17:41.089 "traddr": "10.0.0.2", 00:17:41.089 "trsvcid": "4420" 00:17:41.089 }, 00:17:41.089 "peer_address": { 00:17:41.089 "trtype": "TCP", 00:17:41.089 "adrfam": "IPv4", 00:17:41.089 "traddr": 
"10.0.0.1", 00:17:41.089 "trsvcid": "47402" 00:17:41.089 }, 00:17:41.090 "auth": { 00:17:41.090 "state": "completed", 00:17:41.090 "digest": "sha512", 00:17:41.090 "dhgroup": "ffdhe8192" 00:17:41.090 } 00:17:41.090 } 00:17:41.090 ]' 00:17:41.090 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.348 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.348 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.348 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.348 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.348 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.348 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.348 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.607 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:17:41.607 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:17:42.175 17:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.175 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:42.175 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.175 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.175 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.175 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:42.175 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.175 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.175 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.175 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:42.175 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:42.433 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:42.433 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:42.433 17:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:42.433 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:42.433 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.433 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:42.433 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.433 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.433 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.433 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.433 request: 00:17:42.433 { 00:17:42.433 "name": "nvme0", 00:17:42.433 "trtype": "tcp", 00:17:42.433 "traddr": "10.0.0.2", 00:17:42.433 "adrfam": "ipv4", 00:17:42.433 "trsvcid": "4420", 00:17:42.433 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:42.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:42.433 "prchk_reftag": false, 00:17:42.433 "prchk_guard": false, 00:17:42.433 "hdgst": false, 00:17:42.433 "ddgst": false, 00:17:42.433 "dhchap_key": "key3", 00:17:42.433 
"allow_unrecognized_csi": false, 00:17:42.433 "method": "bdev_nvme_attach_controller", 00:17:42.433 "req_id": 1 00:17:42.433 } 00:17:42.433 Got JSON-RPC error response 00:17:42.433 response: 00:17:42.433 { 00:17:42.433 "code": -5, 00:17:42.433 "message": "Input/output error" 00:17:42.433 } 00:17:42.692 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:42.692 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:42.692 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:42.692 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:42.692 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:42.692 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:42.692 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:42.692 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:42.692 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:42.692 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:42.692 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:42.692 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:42.692 17:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.692 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:42.692 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.692 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.692 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.692 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.951 request: 00:17:42.951 { 00:17:42.951 "name": "nvme0", 00:17:42.951 "trtype": "tcp", 00:17:42.951 "traddr": "10.0.0.2", 00:17:42.951 "adrfam": "ipv4", 00:17:42.951 "trsvcid": "4420", 00:17:42.951 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:42.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:42.951 "prchk_reftag": false, 00:17:42.951 "prchk_guard": false, 00:17:42.951 "hdgst": false, 00:17:42.951 "ddgst": false, 00:17:42.951 "dhchap_key": "key3", 00:17:42.951 "allow_unrecognized_csi": false, 00:17:42.951 "method": "bdev_nvme_attach_controller", 00:17:42.951 "req_id": 1 00:17:42.951 } 00:17:42.951 Got JSON-RPC error response 00:17:42.951 response: 00:17:42.951 { 00:17:42.951 "code": -5, 00:17:42.951 "message": "Input/output error" 00:17:42.951 } 00:17:42.951 
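The `NOT bdev_connect ...` sequences above are negative tests: after the host's allowed digests or DH groups are restricted, the attach is expected to fail (here with the JSON-RPC `-5` Input/output error), and the wrapper inverts the exit status so the test passes on failure. A minimal sketch of that pattern, assuming it is a simplified form of the `NOT` helper in `common/autotest_common.sh` (the real helper also performs the `valid_exec_arg` and `es` bookkeeping visible in the trace):

```shell
# Minimal sketch of a NOT-style negative-test helper.
# Assumption: simplified version of NOT from common/autotest_common.sh.
NOT() {
    # Run the command; succeed only if it fails.
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, as the negative test expects
}
```

For instance, `NOT bdev_connect -b nvme0 --dhchap-key key3` passes precisely because the controller attach is rejected once host and subsystem no longer agree on DH-HMAC-CHAP parameters.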
17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:42.951 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:42.951 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:42.951 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:42.951 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:42.951 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:42.951 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:42.951 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.951 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.951 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.210 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.469 request: 00:17:43.469 { 00:17:43.469 "name": "nvme0", 00:17:43.469 "trtype": "tcp", 00:17:43.469 "traddr": "10.0.0.2", 00:17:43.469 "adrfam": "ipv4", 00:17:43.469 "trsvcid": "4420", 00:17:43.469 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:43.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:43.469 "prchk_reftag": false, 00:17:43.470 "prchk_guard": false, 00:17:43.470 "hdgst": false, 00:17:43.470 "ddgst": false, 00:17:43.470 "dhchap_key": "key0", 00:17:43.470 "dhchap_ctrlr_key": "key1", 00:17:43.470 "allow_unrecognized_csi": false, 00:17:43.470 "method": "bdev_nvme_attach_controller", 00:17:43.470 "req_id": 1 00:17:43.470 } 00:17:43.470 Got JSON-RPC error response 00:17:43.470 response: 00:17:43.470 { 00:17:43.470 "code": -5, 00:17:43.470 "message": "Input/output error" 00:17:43.470 } 00:17:43.470 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:43.470 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.470 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.470 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.470 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:43.470 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:43.470 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:43.728 nvme0n1 00:17:43.728 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:43.728 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:43.728 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.986 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.986 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.986 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.244 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:17:44.244 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.244 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:44.244 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.244 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:44.244 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:44.244 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:45.179 nvme0n1 00:17:45.179 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:45.179 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:45.179 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.179 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.179 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:45.179 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.179 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.179 
17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.179 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:45.179 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.179 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:45.437 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.437 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:17:45.437 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: --dhchap-ctrl-secret DHHC-1:03:OWY5ZDkzMGQyMDNjNGQxYWQ0MDFjOGNjNzI2MzNjMmU2ZjY2NmE0ZjY3ODAxODU0YjBhZDMwYzU0ZWIzZDhhMWPi5Rw=: 00:17:46.004 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:46.004 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:46.004 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:46.004 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:46.004 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:46.004 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:46.004 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:46.004 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.004 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.262 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:46.262 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:46.262 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:46.262 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:46.262 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.262 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:46.262 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.262 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:46.262 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:46.262 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:46.520 request: 00:17:46.520 { 00:17:46.520 "name": "nvme0", 00:17:46.520 "trtype": "tcp", 00:17:46.520 "traddr": "10.0.0.2", 00:17:46.520 "adrfam": "ipv4", 00:17:46.520 "trsvcid": "4420", 00:17:46.520 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:46.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:46.520 "prchk_reftag": false, 00:17:46.520 "prchk_guard": false, 00:17:46.520 "hdgst": false, 00:17:46.520 "ddgst": false, 00:17:46.520 "dhchap_key": "key1", 00:17:46.520 "allow_unrecognized_csi": false, 00:17:46.520 "method": "bdev_nvme_attach_controller", 00:17:46.520 "req_id": 1 00:17:46.520 } 00:17:46.520 Got JSON-RPC error response 00:17:46.521 response: 00:17:46.521 { 00:17:46.521 "code": -5, 00:17:46.521 "message": "Input/output error" 00:17:46.521 } 00:17:46.521 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:46.521 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:46.521 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:46.521 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:46.521 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:46.521 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:46.521 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:47.456 nvme0n1 00:17:47.456 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:47.456 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:47.456 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.456 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.456 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.456 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.714 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:47.714 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.714 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.714 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.714 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:47.714 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:47.714 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:47.971 nvme0n1 00:17:47.971 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:47.971 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:47.971 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.229 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.229 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.229 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: '' 2s 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: ]] 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YWQ3YmRmNjIyOTIzNmE5MjY4Nzg3NTJjNDg5ZmI5ODa/mNZI: 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:48.486 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:50.385 
17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: 2s 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:50.385 17:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: ]] 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OTAyNzI2MjNmNTRiZWY3ZDBjNmMyYzJlNTYzNzhlODNlODg5NGU2ZmViOTJmNzgyrqJZfw==: 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:50.385 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:52.382 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:52.382 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:52.382 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:52.382 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:52.641 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:52.641 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:52.641 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:52.641 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.641 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.641 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.641 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.641 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.641 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:52.641 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:52.641 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:53.207 nvme0n1 00:17:53.208 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.208 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.208 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.208 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.208 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.208 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.775 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:53.775 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:53.775 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.034 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.034 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:54.034 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.034 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.034 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.034 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:54.034 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.292 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.859 request: 00:17:54.859 { 00:17:54.859 "name": "nvme0", 00:17:54.859 "dhchap_key": "key1", 00:17:54.859 "dhchap_ctrlr_key": "key3", 00:17:54.859 "method": "bdev_nvme_set_keys", 00:17:54.859 "req_id": 1 00:17:54.859 } 00:17:54.859 Got JSON-RPC error response 00:17:54.859 response: 00:17:54.859 { 00:17:54.859 "code": -13, 00:17:54.859 "message": "Permission denied" 00:17:54.859 } 00:17:54.859 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:54.859 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.859 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.859 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.859 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:54.859 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:54.859 17:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.117 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:55.117 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:56.052 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:56.052 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:56.052 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.310 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:56.310 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:56.310 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.310 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.310 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.310 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:56.310 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:56.310 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:57.245 nvme0n1 00:17:57.245 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:57.245 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.245 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.245 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.245 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.245 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:57.245 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.245 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:57.245 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.245 17:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:57.245 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.245 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.245 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.503 request: 00:17:57.503 { 00:17:57.503 "name": "nvme0", 00:17:57.503 "dhchap_key": "key2", 00:17:57.503 "dhchap_ctrlr_key": "key0", 00:17:57.503 "method": "bdev_nvme_set_keys", 00:17:57.503 "req_id": 1 00:17:57.503 } 00:17:57.503 Got JSON-RPC error response 00:17:57.503 response: 00:17:57.503 { 00:17:57.503 "code": -13, 00:17:57.503 "message": "Permission denied" 00:17:57.503 } 00:17:57.503 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:57.503 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.503 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.503 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.503 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:57.503 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:57.503 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.762 17:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:57.762 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:58.698 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:58.698 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:58.698 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1882693 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1882693 ']' 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1882693 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1882693 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 1882693' 00:17:58.957 killing process with pid 1882693 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1882693 00:17:58.957 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1882693 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.216 rmmod nvme_tcp 00:17:59.216 rmmod nvme_fabrics 00:17:59.216 rmmod nvme_keyring 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1904374 ']' 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1904374 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1904374 ']' 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1904374 
00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.216 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1904374 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1904374' 00:17:59.476 killing process with pid 1904374 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1904374 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1904374 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:59.476 17:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:59.476 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.GIT /tmp/spdk.key-sha256.hfT /tmp/spdk.key-sha384.Zwa /tmp/spdk.key-sha512.W6A /tmp/spdk.key-sha512.JKB /tmp/spdk.key-sha384.1gI /tmp/spdk.key-sha256.J7w '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:02.014 00:18:02.014 real 2m31.782s 00:18:02.014 user 5m50.108s 00:18:02.014 sys 0m24.068s 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.014 ************************************ 00:18:02.014 END TEST nvmf_auth_target 00:18:02.014 ************************************ 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.014 ************************************ 00:18:02.014 START TEST nvmf_bdevio_no_huge 00:18:02.014 ************************************ 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:02.014 * Looking for test storage... 00:18:02.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.014 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:02.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.015 --rc genhtml_branch_coverage=1 00:18:02.015 --rc genhtml_function_coverage=1 00:18:02.015 --rc genhtml_legend=1 00:18:02.015 --rc geninfo_all_blocks=1 00:18:02.015 --rc geninfo_unexecuted_blocks=1 00:18:02.015 00:18:02.015 ' 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:02.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.015 --rc genhtml_branch_coverage=1 00:18:02.015 --rc genhtml_function_coverage=1 00:18:02.015 --rc genhtml_legend=1 00:18:02.015 --rc geninfo_all_blocks=1 00:18:02.015 --rc geninfo_unexecuted_blocks=1 00:18:02.015 00:18:02.015 ' 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:02.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.015 --rc genhtml_branch_coverage=1 00:18:02.015 --rc genhtml_function_coverage=1 00:18:02.015 --rc genhtml_legend=1 00:18:02.015 --rc geninfo_all_blocks=1 00:18:02.015 --rc geninfo_unexecuted_blocks=1 00:18:02.015 00:18:02.015 ' 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:02.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.015 --rc genhtml_branch_coverage=1 
00:18:02.015 --rc genhtml_function_coverage=1 00:18:02.015 --rc genhtml_legend=1 00:18:02.015 --rc geninfo_all_blocks=1 00:18:02.015 --rc geninfo_unexecuted_blocks=1 00:18:02.015 00:18:02.015 ' 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:02.015 17:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
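Editor's note on the `[: : integer expression expected` error captured just above: it comes from `'[' '' -eq 1 ']'` at common.sh line 33, where an empty variable reaches a numeric `test`. A minimal sketch of the failure mode and a defensive default (the `is_enabled` helper is hypothetical, not the SPDK function):

```shell
# Hypothetical helper, not taken from common.sh: an empty string fed to
# 'test -eq' raises "[: : integer expression expected", so default the
# value to 0 before the numeric comparison.
is_enabled() {
  [ "${1:-0}" -eq 1 ]
}

is_enabled 1  && echo "enabled"    # integer input behaves as usual
is_enabled "" || echo "disabled"   # empty input no longer errors out
```

With the `${1:-0}` expansion the guard returns a clean false for empty input instead of printing an error, which is what the unguarded check at line 33 fails to do.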
00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:02.015 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.584 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:08.584 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:08.584 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:08.584 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:08.584 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:08.584 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:08.584 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:08.584 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:08.584 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:08.584 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:08.584 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:08.584 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:18:08.585 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:08.585 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:08.585 Found net devices under 0000:af:00.0: cvl_0_0 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.585 
17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:08.585 Found net devices under 0000:af:00.1: cvl_0_1 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:08.585 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:08.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:18:08.585 00:18:08.585 --- 10.0.0.2 ping statistics --- 00:18:08.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.585 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:08.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:08.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:18:08.585 00:18:08.585 --- 10.0.0.1 ping statistics --- 00:18:08.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.585 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:08.585 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:08.586 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.586 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1911166 00:18:08.586 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1911166 00:18:08.586 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:08.586 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1911166 ']' 00:18:08.586 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.586 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.586 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.586 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.586 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.586 [2024-12-09 17:28:34.296643] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:18:08.586 [2024-12-09 17:28:34.296688] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:08.586 [2024-12-09 17:28:34.381296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:08.586 [2024-12-09 17:28:34.431889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.586 [2024-12-09 17:28:34.431924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.586 [2024-12-09 17:28:34.431933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.586 [2024-12-09 17:28:34.431942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.586 [2024-12-09 17:28:34.431948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
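Editor's note for orientation: the `-m 0x78` core mask passed to nvmf_tgt above corresponds to the four reactors the following notices start on cores 3 through 6 (0x78 = 0b1111000). A small sketch decoding a hex core mask with plain shell arithmetic (`mask_to_cores` is an illustration, not an SPDK script):

```shell
# Decode a CPU core mask (as passed via -m) into the list of set bit
# positions, i.e. the cores the reactors will run on.
mask_to_cores() {
  local mask=$(( $1 )) core=0 out=""
  while [ "$mask" -ne 0 ]; do
    # Collect this core number if its bit is set in the mask.
    if [ $(( mask & 1 )) -eq 1 ]; then out="$out $core"; fi
    mask=$(( mask >> 1 )); core=$(( core + 1 ))
  done
  echo "${out# }"
}

mask_to_cores 0x78   # -> 3 4 5 6
```

The same decoding applied to the bdevio side's `-c 0x7` mask later in the log yields cores 0, 1, and 2, matching its three reactor notices.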
00:18:08.586 [2024-12-09 17:28:34.433087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:08.586 [2024-12-09 17:28:34.433206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:08.586 [2024-12-09 17:28:34.433237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:08.586 [2024-12-09 17:28:34.433237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.843 [2024-12-09 17:28:35.174826] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:08.843 17:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.843 Malloc0 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.843 [2024-12-09 17:28:35.219115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.843 17:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:08.843 { 00:18:08.843 "params": { 00:18:08.843 "name": "Nvme$subsystem", 00:18:08.843 "trtype": "$TEST_TRANSPORT", 00:18:08.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.843 "adrfam": "ipv4", 00:18:08.843 "trsvcid": "$NVMF_PORT", 00:18:08.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.843 "hdgst": ${hdgst:-false}, 00:18:08.843 "ddgst": ${ddgst:-false} 00:18:08.843 }, 00:18:08.843 "method": "bdev_nvme_attach_controller" 00:18:08.843 } 00:18:08.843 EOF 00:18:08.843 )") 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:08.843 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:08.843 "params": { 00:18:08.843 "name": "Nvme1", 00:18:08.843 "trtype": "tcp", 00:18:08.843 "traddr": "10.0.0.2", 00:18:08.843 "adrfam": "ipv4", 00:18:08.843 "trsvcid": "4420", 00:18:08.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.843 "hdgst": false, 00:18:08.843 "ddgst": false 00:18:08.843 }, 00:18:08.843 "method": "bdev_nvme_attach_controller" 00:18:08.843 }' 00:18:08.843 [2024-12-09 17:28:35.268316] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:18:08.843 [2024-12-09 17:28:35.268357] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1911358 ] 00:18:08.843 [2024-12-09 17:28:35.345419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:09.100 [2024-12-09 17:28:35.393458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.100 [2024-12-09 17:28:35.393581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.100 [2024-12-09 17:28:35.393582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.100 I/O targets: 00:18:09.100 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:09.100 00:18:09.100 00:18:09.100 CUnit - A unit testing framework for C - Version 2.1-3 00:18:09.100 http://cunit.sourceforge.net/ 00:18:09.100 00:18:09.100 00:18:09.100 Suite: bdevio tests on: Nvme1n1 00:18:09.100 Test: blockdev write read block ...passed 00:18:09.357 Test: blockdev write zeroes read block ...passed 00:18:09.357 Test: blockdev write zeroes read no split ...passed 00:18:09.357 Test: blockdev write zeroes 
read split ...passed 00:18:09.357 Test: blockdev write zeroes read split partial ...passed 00:18:09.357 Test: blockdev reset ...[2024-12-09 17:28:35.763911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:09.357 [2024-12-09 17:28:35.763976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a5be0 (9): Bad file descriptor 00:18:09.357 [2024-12-09 17:28:35.777140] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:09.357 passed 00:18:09.357 Test: blockdev write read 8 blocks ...passed 00:18:09.357 Test: blockdev write read size > 128k ...passed 00:18:09.357 Test: blockdev write read invalid size ...passed 00:18:09.357 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:09.357 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:09.357 Test: blockdev write read max offset ...passed 00:18:09.614 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:09.614 Test: blockdev writev readv 8 blocks ...passed 00:18:09.614 Test: blockdev writev readv 30 x 1block ...passed 00:18:09.614 Test: blockdev writev readv block ...passed 00:18:09.614 Test: blockdev writev readv size > 128k ...passed 00:18:09.614 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:09.614 Test: blockdev comparev and writev ...[2024-12-09 17:28:35.987853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.614 [2024-12-09 17:28:35.987883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.614 [2024-12-09 17:28:35.987896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.614 [2024-12-09 
17:28:35.987904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.614 [2024-12-09 17:28:35.988122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.614 [2024-12-09 17:28:35.988132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:09.614 [2024-12-09 17:28:35.988144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.614 [2024-12-09 17:28:35.988151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:09.614 [2024-12-09 17:28:35.988389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.614 [2024-12-09 17:28:35.988400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:09.614 [2024-12-09 17:28:35.988411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.614 [2024-12-09 17:28:35.988418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:09.614 [2024-12-09 17:28:35.988648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.614 [2024-12-09 17:28:35.988658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:09.614 [2024-12-09 17:28:35.988674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.614 [2024-12-09 17:28:35.988681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:09.614 passed 00:18:09.614 Test: blockdev nvme passthru rw ...passed 00:18:09.614 Test: blockdev nvme passthru vendor specific ...[2024-12-09 17:28:36.070522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.614 [2024-12-09 17:28:36.070540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:09.614 [2024-12-09 17:28:36.070641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.614 [2024-12-09 17:28:36.070652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:09.614 [2024-12-09 17:28:36.070747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.614 [2024-12-09 17:28:36.070757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:09.614 [2024-12-09 17:28:36.070859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.614 [2024-12-09 17:28:36.070870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:09.614 passed 00:18:09.614 Test: blockdev nvme admin passthru ...passed 00:18:09.614 Test: blockdev copy ...passed 00:18:09.614 00:18:09.614 Run Summary: Type Total Ran Passed Failed Inactive 00:18:09.614 suites 1 1 n/a 0 0 00:18:09.614 tests 23 23 23 0 0 00:18:09.614 asserts 152 152 152 0 n/a 00:18:09.614 00:18:09.614 Elapsed time = 1.145 seconds 
00:18:09.872 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.872 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.872 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.872 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.872 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:09.872 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:09.872 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:09.872 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:09.872 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:09.872 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:09.872 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:09.872 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:09.872 rmmod nvme_tcp 00:18:09.872 rmmod nvme_fabrics 00:18:10.129 rmmod nvme_keyring 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1911166 ']' 00:18:10.129 17:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1911166 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1911166 ']' 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1911166 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1911166 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1911166' 00:18:10.129 killing process with pid 1911166 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1911166 00:18:10.129 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1911166 00:18:10.388 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:10.388 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:10.388 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:10.388 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:10.388 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:10.388 17:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:10.388 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:10.388 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:10.389 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:10.389 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.389 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.389 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.925 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:12.925 00:18:12.925 real 0m10.766s 00:18:12.925 user 0m13.032s 00:18:12.925 sys 0m5.355s 00:18:12.925 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.925 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:12.925 ************************************ 00:18:12.925 END TEST nvmf_bdevio_no_huge 00:18:12.925 ************************************ 00:18:12.925 17:28:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:12.925 17:28:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:12.925 17:28:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.925 17:28:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:12.925 
************************************ 00:18:12.925 START TEST nvmf_tls 00:18:12.925 ************************************ 00:18:12.925 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:12.925 * Looking for test storage... 00:18:12.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:12.925 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:12.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.926 --rc genhtml_branch_coverage=1 00:18:12.926 --rc genhtml_function_coverage=1 00:18:12.926 --rc genhtml_legend=1 00:18:12.926 --rc geninfo_all_blocks=1 00:18:12.926 --rc geninfo_unexecuted_blocks=1 00:18:12.926 00:18:12.926 ' 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:12.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.926 --rc genhtml_branch_coverage=1 00:18:12.926 --rc genhtml_function_coverage=1 00:18:12.926 --rc genhtml_legend=1 00:18:12.926 --rc geninfo_all_blocks=1 00:18:12.926 --rc geninfo_unexecuted_blocks=1 00:18:12.926 00:18:12.926 ' 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:12.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.926 --rc genhtml_branch_coverage=1 00:18:12.926 --rc genhtml_function_coverage=1 00:18:12.926 --rc genhtml_legend=1 00:18:12.926 --rc geninfo_all_blocks=1 00:18:12.926 --rc geninfo_unexecuted_blocks=1 00:18:12.926 00:18:12.926 ' 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:12.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.926 --rc genhtml_branch_coverage=1 00:18:12.926 --rc genhtml_function_coverage=1 00:18:12.926 --rc genhtml_legend=1 00:18:12.926 --rc geninfo_all_blocks=1 00:18:12.926 --rc geninfo_unexecuted_blocks=1 00:18:12.926 00:18:12.926 ' 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.926 
17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:12.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:12.926 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:19.496 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.497 17:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:19.497 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:19.497 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.497 17:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:19.497 Found net devices under 0000:af:00.0: cvl_0_0 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:19.497 Found net devices under 0000:af:00.1: cvl_0_1 00:18:19.497 17:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:19.497 
17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.497 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:19.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:18:19.497 00:18:19.497 --- 10.0.0.2 ping statistics --- 00:18:19.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.497 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:18:19.497 00:18:19.497 --- 10.0.0.1 ping statistics --- 00:18:19.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.497 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1915047 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1915047 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1915047 ']' 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.497 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.497 [2024-12-09 17:28:45.118881] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:18:19.497 [2024-12-09 17:28:45.118923] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.497 [2024-12-09 17:28:45.195959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.497 [2024-12-09 17:28:45.235696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.497 [2024-12-09 17:28:45.235732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:19.497 [2024-12-09 17:28:45.235741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.497 [2024-12-09 17:28:45.235747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.498 [2024-12-09 17:28:45.235753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.498 [2024-12-09 17:28:45.236246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:19.498 true 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:19.498 
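The probes around here pipe `sock_impl_get_options -i ssl` through `jq -r` to pull a single field (`.tls_version`, later `.enable_ktls`) and compare it against the value just set. The equivalent extraction in Python, with a reply shape assumed from the fields the script reads (the real RPC reply carries more options than shown):

```python
import json

# Hypothetical reply fragment; the actual sock_impl_get_options output
# includes additional socket options beyond these two fields.
reply = '{"tls_version": 13, "enable_ktls": false}'

opts = json.loads(reply)
print(opts["tls_version"])   # mirrors `jq -r .tls_version`
print(opts["enable_ktls"])   # mirrors `jq -r .enable_ktls`
```

The script's pattern of set-then-get-then-compare (e.g. `[[ 13 != \1\3 ]]`) is how it verifies each RPC actually took effect.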
17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:19.498 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:19.756 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:19.756 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:19.756 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:20.015 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:20.015 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:20.015 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:20.015 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:20.015 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:20.015 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:20.274 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:20.274 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:20.274 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
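The `format_interchange_psk`/`format_key` steps that follow (nvmf/common.sh@730–743) build the NVMe TLS PSK interchange string through an inline `python -` heredoc. A sketch of the presumed logic, assuming the retained bytes are the ASCII hex string as passed by the script, with a little-endian CRC-32 appended before base64 encoding:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    """Build an NVMe TLS PSK interchange string: NVMeTLSkey-1:<hh>:<base64>:"""
    data = key.encode()                           # ASCII bytes of the hex string
    crc = zlib.crc32(data).to_bytes(4, "little")  # integrity check appended to the key
    b64 = base64.b64encode(data + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)
```

If the assumptions hold, `format_interchange_psk("00112233445566778899aabbccddeeff", 1)` reproduces the `key=` value the log prints below.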
00:18:20.533 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:20.533 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:20.533 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:20.533 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:20.533 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:20.792 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:20.792 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:21.051 17:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Av9keFCV44 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.tyNddV1qCK 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Av9keFCV44 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.tyNddV1qCK 00:18:21.051 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:21.310 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:21.569 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Av9keFCV44 00:18:21.569 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Av9keFCV44 00:18:21.569 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:21.827 [2024-12-09 17:28:48.154727] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.827 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:21.827 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:22.086 [2024-12-09 17:28:48.527661] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.086 [2024-12-09 17:28:48.527874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.086 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:22.344 malloc0 00:18:22.344 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:22.603 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Av9keFCV44 00:18:22.603 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:22.861 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Av9keFCV44 00:18:32.838 Initializing NVMe Controllers 00:18:32.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:32.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:32.838 Initialization complete. Launching workers. 
00:18:32.838 ======================================================== 00:18:32.838 Latency(us) 00:18:32.838 Device Information : IOPS MiB/s Average min max 00:18:32.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16858.14 65.85 3796.48 781.70 5304.23 00:18:32.838 ======================================================== 00:18:32.838 Total : 16858.14 65.85 3796.48 781.70 5304.23 00:18:32.838 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Av9keFCV44 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Av9keFCV44 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1917464 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1917464 /var/tmp/bdevperf.sock 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1917464 ']' 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
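The perf summary's two throughput columns are consistent with each other: MiB/s is just IOPS times the 4096-byte I/O size divided by 2^20. A quick check against the 16858.14 IOPS reported here (the same arithmetic applies to the bdevperf summary later in the run):

```python
def iops_to_mibps(iops: float, io_size_bytes: int = 4096) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size_bytes / (1 << 20)

# spdk_nvme_perf summary: 16858.14 IOPS at 4 KiB
print(round(iops_to_mibps(16858.14), 2))  # 65.85
# bdevperf summary: 5584.18 IOPS at 4 KiB
print(round(iops_to_mibps(5584.18), 2))   # 21.81
```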
00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.838 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.107 [2024-12-09 17:28:59.417667] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:18:33.107 [2024-12-09 17:28:59.417715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917464 ] 00:18:33.107 [2024-12-09 17:28:59.491365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.107 [2024-12-09 17:28:59.532039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.107 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.107 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:33.107 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Av9keFCV44 00:18:33.367 17:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:33.625 [2024-12-09 17:28:59.979623] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.625 TLSTESTn1 00:18:33.625 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:33.625 Running I/O for 10 seconds... 00:18:35.939 5386.00 IOPS, 21.04 MiB/s [2024-12-09T16:29:03.415Z] 5467.50 IOPS, 21.36 MiB/s [2024-12-09T16:29:04.351Z] 5552.67 IOPS, 21.69 MiB/s [2024-12-09T16:29:05.295Z] 5569.75 IOPS, 21.76 MiB/s [2024-12-09T16:29:06.233Z] 5558.00 IOPS, 21.71 MiB/s [2024-12-09T16:29:07.609Z] 5546.67 IOPS, 21.67 MiB/s [2024-12-09T16:29:08.546Z] 5574.00 IOPS, 21.77 MiB/s [2024-12-09T16:29:09.482Z] 5588.75 IOPS, 21.83 MiB/s [2024-12-09T16:29:10.419Z] 5594.56 IOPS, 21.85 MiB/s [2024-12-09T16:29:10.419Z] 5579.40 IOPS, 21.79 MiB/s 00:18:43.879 Latency(us) 00:18:43.879 [2024-12-09T16:29:10.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.879 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:43.879 Verification LBA range: start 0x0 length 0x2000 00:18:43.879 TLSTESTn1 : 10.01 5584.18 21.81 0.00 0.00 22886.17 6428.77 22469.49 00:18:43.879 [2024-12-09T16:29:10.419Z] =================================================================================================================== 00:18:43.879 [2024-12-09T16:29:10.419Z] Total : 5584.18 21.81 0.00 0.00 22886.17 6428.77 22469.49 00:18:43.879 { 00:18:43.879 "results": [ 00:18:43.879 { 00:18:43.879 "job": "TLSTESTn1", 00:18:43.879 "core_mask": "0x4", 00:18:43.879 "workload": "verify", 00:18:43.879 "status": "finished", 00:18:43.879 "verify_range": { 00:18:43.879 "start": 0, 00:18:43.879 "length": 8192 00:18:43.879 }, 00:18:43.879 "queue_depth": 128, 00:18:43.879 "io_size": 4096, 00:18:43.879 "runtime": 10.014009, 00:18:43.879 "iops": 
5584.177126263817, 00:18:43.879 "mibps": 21.813191899468034, 00:18:43.879 "io_failed": 0, 00:18:43.879 "io_timeout": 0, 00:18:43.879 "avg_latency_us": 22886.167142720893, 00:18:43.879 "min_latency_us": 6428.769523809524, 00:18:43.879 "max_latency_us": 22469.485714285714 00:18:43.879 } 00:18:43.879 ], 00:18:43.879 "core_count": 1 00:18:43.879 } 00:18:43.879 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:43.879 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1917464 00:18:43.879 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1917464 ']' 00:18:43.879 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1917464 00:18:43.879 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:43.879 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.879 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1917464 00:18:43.879 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:43.879 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:43.879 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1917464' 00:18:43.879 killing process with pid 1917464 00:18:43.879 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1917464 00:18:43.879 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.879 00:18:43.879 Latency(us) 00:18:43.879 [2024-12-09T16:29:10.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.879 [2024-12-09T16:29:10.419Z] 
=================================================================================================================== 00:18:43.879 [2024-12-09T16:29:10.419Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.879 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1917464 00:18:44.138 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tyNddV1qCK 00:18:44.138 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:44.138 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tyNddV1qCK 00:18:44.138 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:44.138 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tyNddV1qCK 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tyNddV1qCK 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1919213 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1919213 /var/tmp/bdevperf.sock 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1919213 ']' 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.139 [2024-12-09 17:29:10.478307] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:18:44.139 [2024-12-09 17:29:10.478356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919213 ] 00:18:44.139 [2024-12-09 17:29:10.547652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.139 [2024-12-09 17:29:10.588686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.139 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tyNddV1qCK 00:18:44.398 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:44.656 [2024-12-09 17:29:11.049592] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.656 [2024-12-09 17:29:11.056439] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:44.656 [2024-12-09 17:29:11.056820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c603a0 (107): Transport endpoint is not connected 00:18:44.656 [2024-12-09 17:29:11.057813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c603a0 (9): Bad file descriptor 00:18:44.656 
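The `NOT`/`valid_exec_arg` wrapper seen above (autotest_common.sh@652 onward) inverts the wrapped command's exit status so that an expected failure, here attaching with the wrong PSK (`/tmp/tmp.tyNddV1qCK`, never registered on the target), counts as a pass. The pattern sketched outside the harness, not SPDK's exact helper:

```python
import subprocess

def NOT(*cmd: str) -> bool:
    """Mirror the harness's NOT wrapper: succeed only when the command fails."""
    return subprocess.run(cmd, capture_output=True).returncode != 0

# An expected failure passes the negative test.
assert NOT("false")
# An unexpected success would be flagged.
assert not NOT("true")
```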
[2024-12-09 17:29:11.058815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:44.656 [2024-12-09 17:29:11.058825] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:44.656 [2024-12-09 17:29:11.058832] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:44.656 [2024-12-09 17:29:11.058842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:44.656 request: 00:18:44.656 { 00:18:44.656 "name": "TLSTEST", 00:18:44.656 "trtype": "tcp", 00:18:44.656 "traddr": "10.0.0.2", 00:18:44.656 "adrfam": "ipv4", 00:18:44.656 "trsvcid": "4420", 00:18:44.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.656 "prchk_reftag": false, 00:18:44.656 "prchk_guard": false, 00:18:44.656 "hdgst": false, 00:18:44.656 "ddgst": false, 00:18:44.656 "psk": "key0", 00:18:44.656 "allow_unrecognized_csi": false, 00:18:44.656 "method": "bdev_nvme_attach_controller", 00:18:44.656 "req_id": 1 00:18:44.656 } 00:18:44.656 Got JSON-RPC error response 00:18:44.656 response: 00:18:44.656 { 00:18:44.656 "code": -5, 00:18:44.656 "message": "Input/output error" 00:18:44.657 } 00:18:44.657 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1919213 00:18:44.657 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1919213 ']' 00:18:44.657 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1919213 00:18:44.657 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:44.657 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.657 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1919213 00:18:44.657 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:44.657 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:44.657 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1919213' 00:18:44.657 killing process with pid 1919213 00:18:44.657 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1919213 00:18:44.657 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.657 00:18:44.657 Latency(us) 00:18:44.657 [2024-12-09T16:29:11.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.657 [2024-12-09T16:29:11.197Z] =================================================================================================================== 00:18:44.657 [2024-12-09T16:29:11.197Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:44.657 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1919213 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Av9keFCV44 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Av9keFCV44 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Av9keFCV44 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Av9keFCV44 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1919349 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1919349 
/var/tmp/bdevperf.sock 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1919349 ']' 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.916 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.916 [2024-12-09 17:29:11.332654] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:18:44.916 [2024-12-09 17:29:11.332703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919349 ] 00:18:44.916 [2024-12-09 17:29:11.401379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.916 [2024-12-09 17:29:11.437342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.175 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.175 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:45.175 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Av9keFCV44 00:18:45.434 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:45.434 [2024-12-09 17:29:11.921655] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.434 [2024-12-09 17:29:11.928557] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:45.434 [2024-12-09 17:29:11.928582] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:45.434 [2024-12-09 17:29:11.928606] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:45.434 [2024-12-09 17:29:11.928926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23913a0 (107): Transport endpoint is not connected 00:18:45.434 [2024-12-09 17:29:11.929919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23913a0 (9): Bad file descriptor 00:18:45.434 [2024-12-09 17:29:11.930921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:45.434 [2024-12-09 17:29:11.930932] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:45.434 [2024-12-09 17:29:11.930940] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:45.434 [2024-12-09 17:29:11.930950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:45.434 request: 00:18:45.434 { 00:18:45.434 "name": "TLSTEST", 00:18:45.434 "trtype": "tcp", 00:18:45.434 "traddr": "10.0.0.2", 00:18:45.434 "adrfam": "ipv4", 00:18:45.434 "trsvcid": "4420", 00:18:45.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.434 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:45.434 "prchk_reftag": false, 00:18:45.434 "prchk_guard": false, 00:18:45.434 "hdgst": false, 00:18:45.434 "ddgst": false, 00:18:45.434 "psk": "key0", 00:18:45.434 "allow_unrecognized_csi": false, 00:18:45.434 "method": "bdev_nvme_attach_controller", 00:18:45.434 "req_id": 1 00:18:45.434 } 00:18:45.434 Got JSON-RPC error response 00:18:45.434 response: 00:18:45.434 { 00:18:45.434 "code": -5, 00:18:45.434 "message": "Input/output error" 00:18:45.434 } 00:18:45.434 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1919349 00:18:45.435 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1919349 ']' 00:18:45.435 17:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1919349 00:18:45.435 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.435 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.435 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1919349 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1919349' 00:18:45.694 killing process with pid 1919349 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1919349 00:18:45.694 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.694 00:18:45.694 Latency(us) 00:18:45.694 [2024-12-09T16:29:12.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.694 [2024-12-09T16:29:12.234Z] =================================================================================================================== 00:18:45.694 [2024-12-09T16:29:12.234Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1919349 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.694 17:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Av9keFCV44 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Av9keFCV44 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Av9keFCV44 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Av9keFCV44 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1919576 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1919576 /var/tmp/bdevperf.sock 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1919576 ']' 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.694 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.694 [2024-12-09 17:29:12.212275] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:18:45.694 [2024-12-09 17:29:12.212323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919576 ] 00:18:45.954 [2024-12-09 17:29:12.277005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.954 [2024-12-09 17:29:12.317783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.954 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.954 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:45.954 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Av9keFCV44 00:18:46.212 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:46.472 [2024-12-09 17:29:12.766179] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.472 [2024-12-09 17:29:12.770755] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:46.472 [2024-12-09 17:29:12.770778] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:46.472 [2024-12-09 17:29:12.770802] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:46.472 [2024-12-09 17:29:12.771476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9743a0 (107): Transport endpoint is not connected 00:18:46.472 [2024-12-09 17:29:12.772469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9743a0 (9): Bad file descriptor 00:18:46.472 [2024-12-09 17:29:12.773470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:46.472 [2024-12-09 17:29:12.773481] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:46.472 [2024-12-09 17:29:12.773488] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:46.472 [2024-12-09 17:29:12.773499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:46.472 request: 00:18:46.472 { 00:18:46.472 "name": "TLSTEST", 00:18:46.472 "trtype": "tcp", 00:18:46.472 "traddr": "10.0.0.2", 00:18:46.472 "adrfam": "ipv4", 00:18:46.472 "trsvcid": "4420", 00:18:46.472 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:46.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.472 "prchk_reftag": false, 00:18:46.472 "prchk_guard": false, 00:18:46.472 "hdgst": false, 00:18:46.472 "ddgst": false, 00:18:46.472 "psk": "key0", 00:18:46.472 "allow_unrecognized_csi": false, 00:18:46.472 "method": "bdev_nvme_attach_controller", 00:18:46.472 "req_id": 1 00:18:46.472 } 00:18:46.472 Got JSON-RPC error response 00:18:46.472 response: 00:18:46.472 { 00:18:46.472 "code": -5, 00:18:46.472 "message": "Input/output error" 00:18:46.472 } 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1919576 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1919576 ']' 00:18:46.472 17:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1919576 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1919576 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1919576' 00:18:46.472 killing process with pid 1919576 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1919576 00:18:46.472 Received shutdown signal, test time was about 10.000000 seconds 00:18:46.472 00:18:46.472 Latency(us) 00:18:46.472 [2024-12-09T16:29:13.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.472 [2024-12-09T16:29:13.012Z] =================================================================================================================== 00:18:46.472 [2024-12-09T16:29:13.012Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1919576 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.472 17:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:46.472 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1919592 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:46.472 17:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1919592 /var/tmp/bdevperf.sock 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1919592 ']' 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.472 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.731 [2024-12-09 17:29:13.051613] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:18:46.731 [2024-12-09 17:29:13.051658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919592 ] 00:18:46.731 [2024-12-09 17:29:13.126010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.731 [2024-12-09 17:29:13.163826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.731 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.731 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.731 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:46.989 [2024-12-09 17:29:13.439332] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:46.989 [2024-12-09 17:29:13.439364] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:46.989 request: 00:18:46.989 { 00:18:46.989 "name": "key0", 00:18:46.989 "path": "", 00:18:46.989 "method": "keyring_file_add_key", 00:18:46.989 "req_id": 1 00:18:46.989 } 00:18:46.989 Got JSON-RPC error response 00:18:46.989 response: 00:18:46.989 { 00:18:46.989 "code": -1, 00:18:46.989 "message": "Operation not permitted" 00:18:46.989 } 00:18:46.989 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.248 [2024-12-09 17:29:13.635927] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
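The keyring failure just above is this run's expected negative result: `keyring_file_add_key` was passed an empty path, and SPDK's keyring rejects any non-absolute path before the key is registered, which the RPC layer reports as code -1. A minimal Python sketch of that validation step (a hypothetical stand-in for the C-side `keyring_file_check_path`, not SPDK code):

```python
import json
import os.path

def check_key_path(path: str) -> None:
    # Mirror the behavior logged by keyring_file_check_path: an empty or
    # relative path is rejected before the key reaches the keyring.
    if not os.path.isabs(path):
        raise ValueError(f"Non-absolute paths are not allowed: {path!r}")

# Request exactly as printed in the log entry above.
request = {"name": "key0", "path": "", "method": "keyring_file_add_key", "req_id": 1}
try:
    check_key_path(request["path"])
    response = {"result": True}
except ValueError:
    # The RPC layer maps the rejection to -1, "Operation not permitted".
    response = {"code": -1, "message": "Operation not permitted"}

print(json.dumps(response))
```

With the empty path from the log, this takes the error branch and prints the same error object the JSON-RPC response carries.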
00:18:47.248 [2024-12-09 17:29:13.635960] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:47.248 request: 00:18:47.248 { 00:18:47.248 "name": "TLSTEST", 00:18:47.248 "trtype": "tcp", 00:18:47.248 "traddr": "10.0.0.2", 00:18:47.248 "adrfam": "ipv4", 00:18:47.248 "trsvcid": "4420", 00:18:47.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.248 "prchk_reftag": false, 00:18:47.248 "prchk_guard": false, 00:18:47.248 "hdgst": false, 00:18:47.248 "ddgst": false, 00:18:47.248 "psk": "key0", 00:18:47.248 "allow_unrecognized_csi": false, 00:18:47.248 "method": "bdev_nvme_attach_controller", 00:18:47.248 "req_id": 1 00:18:47.248 } 00:18:47.248 Got JSON-RPC error response 00:18:47.248 response: 00:18:47.248 { 00:18:47.248 "code": -126, 00:18:47.248 "message": "Required key not available" 00:18:47.248 } 00:18:47.248 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1919592 00:18:47.248 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1919592 ']' 00:18:47.248 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1919592 00:18:47.248 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:47.248 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.248 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1919592 00:18:47.248 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:47.248 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:47.248 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1919592' 00:18:47.248 killing process with pid 1919592 
00:18:47.248 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1919592 00:18:47.248 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.248 00:18:47.248 Latency(us) 00:18:47.248 [2024-12-09T16:29:13.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.248 [2024-12-09T16:29:13.788Z] =================================================================================================================== 00:18:47.248 [2024-12-09T16:29:13.788Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:47.248 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1919592 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1915047 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1915047 ']' 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1915047 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1915047 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1915047' 00:18:47.507 killing process with pid 1915047 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1915047 00:18:47.507 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1915047 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.8NZE8H98Z0 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:47.765 17:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.8NZE8H98Z0 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1919830 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1919830 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1919830 ']' 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.765 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.765 [2024-12-09 17:29:14.161283] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:18:47.765 [2024-12-09 17:29:14.161331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.765 [2024-12-09 17:29:14.237441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.765 [2024-12-09 17:29:14.272611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.765 [2024-12-09 17:29:14.272644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.765 [2024-12-09 17:29:14.272651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.765 [2024-12-09 17:29:14.272656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.765 [2024-12-09 17:29:14.272661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:47.765 [2024-12-09 17:29:14.273113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.024 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.024 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:48.024 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.024 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.024 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.024 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.024 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.8NZE8H98Z0 00:18:48.024 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8NZE8H98Z0 00:18:48.024 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:48.282 [2024-12-09 17:29:14.584978] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.282 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:48.282 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:48.541 [2024-12-09 17:29:14.949901] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:48.541 [2024-12-09 17:29:14.950101] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:48.541 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:48.800 malloc0 00:18:48.800 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:49.058 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8NZE8H98Z0 00:18:49.058 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8NZE8H98Z0 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8NZE8H98Z0 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1920086 00:18:49.317 17:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1920086 /var/tmp/bdevperf.sock 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1920086 ']' 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.317 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.317 [2024-12-09 17:29:15.746517] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:18:49.317 [2024-12-09 17:29:15.746562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920086 ] 00:18:49.317 [2024-12-09 17:29:15.823404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.575 [2024-12-09 17:29:15.864706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.575 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.575 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.575 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8NZE8H98Z0 00:18:49.833 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:49.833 [2024-12-09 17:29:16.317728] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.092 TLSTESTn1 00:18:50.092 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:50.092 Running I/O for 10 seconds... 
00:18:51.988 5464.00 IOPS, 21.34 MiB/s [2024-12-09T16:29:19.533Z] 5484.00 IOPS, 21.42 MiB/s [2024-12-09T16:29:20.910Z] 5546.33 IOPS, 21.67 MiB/s [2024-12-09T16:29:21.846Z] 5538.50 IOPS, 21.63 MiB/s [2024-12-09T16:29:22.783Z] 5545.40 IOPS, 21.66 MiB/s [2024-12-09T16:29:23.719Z] 5557.67 IOPS, 21.71 MiB/s [2024-12-09T16:29:24.654Z] 5532.29 IOPS, 21.61 MiB/s [2024-12-09T16:29:25.589Z] 5546.00 IOPS, 21.66 MiB/s [2024-12-09T16:29:26.965Z] 5560.33 IOPS, 21.72 MiB/s [2024-12-09T16:29:26.965Z] 5566.30 IOPS, 21.74 MiB/s 00:19:00.425 Latency(us) 00:19:00.425 [2024-12-09T16:29:26.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.425 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:00.425 Verification LBA range: start 0x0 length 0x2000 00:19:00.425 TLSTESTn1 : 10.01 5571.00 21.76 0.00 0.00 22942.55 4743.56 31582.11 00:19:00.425 [2024-12-09T16:29:26.965Z] =================================================================================================================== 00:19:00.425 [2024-12-09T16:29:26.965Z] Total : 5571.00 21.76 0.00 0.00 22942.55 4743.56 31582.11 00:19:00.425 { 00:19:00.425 "results": [ 00:19:00.425 { 00:19:00.425 "job": "TLSTESTn1", 00:19:00.425 "core_mask": "0x4", 00:19:00.425 "workload": "verify", 00:19:00.425 "status": "finished", 00:19:00.425 "verify_range": { 00:19:00.425 "start": 0, 00:19:00.425 "length": 8192 00:19:00.425 }, 00:19:00.425 "queue_depth": 128, 00:19:00.425 "io_size": 4096, 00:19:00.425 "runtime": 10.014352, 00:19:00.425 "iops": 5571.00449434971, 00:19:00.425 "mibps": 21.761736306053553, 00:19:00.425 "io_failed": 0, 00:19:00.425 "io_timeout": 0, 00:19:00.425 "avg_latency_us": 22942.545168599936, 00:19:00.426 "min_latency_us": 4743.558095238095, 00:19:00.426 "max_latency_us": 31582.110476190475 00:19:00.426 } 00:19:00.426 ], 00:19:00.426 "core_count": 1 00:19:00.426 } 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
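The MiB/s column in the summary above is just IOPS times the 4096-byte I/O size from `-o 4096`. Cross-checking the `iops`, `io_size`, and `runtime` fields copied from the results JSON against the reported `mibps`:

```python
# Figures copied from the bdevperf results JSON above.
iops = 5571.00449434971
io_size = 4096            # bytes per I/O, from the -o 4096 option
runtime = 10.014352       # seconds

# MiB/s = IOPS * bytes-per-IO / 2^20; with 4096-byte I/Os this is IOPS/256.
mibps = iops * io_size / (1024 * 1024)
total_ios = iops * runtime

print(f"{mibps:.2f} MiB/s over ~{total_ios:.0f} I/Os")
```

The computed value reproduces the `"mibps": 21.761736306053553` field, confirming the table and the JSON describe the same run.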
exit 1' SIGINT SIGTERM EXIT 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1920086 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1920086 ']' 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1920086 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1920086 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1920086' 00:19:00.426 killing process with pid 1920086 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1920086 00:19:00.426 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.426 00:19:00.426 Latency(us) 00:19:00.426 [2024-12-09T16:29:26.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.426 [2024-12-09T16:29:26.966Z] =================================================================================================================== 00:19:00.426 [2024-12-09T16:29:26.966Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1920086 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.8NZE8H98Z0 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8NZE8H98Z0 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8NZE8H98Z0 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8NZE8H98Z0 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8NZE8H98Z0 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1921878 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1921878 /var/tmp/bdevperf.sock 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1921878 ']' 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.426 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.426 [2024-12-09 17:29:26.828099] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:19:00.426 [2024-12-09 17:29:26.828146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921878 ] 00:19:00.426 [2024-12-09 17:29:26.897229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.426 [2024-12-09 17:29:26.935377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.685 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.685 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:00.685 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8NZE8H98Z0 00:19:00.685 [2024-12-09 17:29:27.207381] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.8NZE8H98Z0': 0100666 00:19:00.685 [2024-12-09 17:29:27.207408] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:00.685 request: 00:19:00.685 { 00:19:00.685 "name": "key0", 00:19:00.685 "path": "/tmp/tmp.8NZE8H98Z0", 00:19:00.685 "method": "keyring_file_add_key", 00:19:00.685 "req_id": 1 00:19:00.685 } 00:19:00.685 Got JSON-RPC error response 00:19:00.685 response: 00:19:00.685 { 00:19:00.685 "code": -1, 00:19:00.685 "message": "Operation not permitted" 00:19:00.685 } 00:19:00.685 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:00.944 [2024-12-09 17:29:27.391943] bdev_nvme_rpc.c: 
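The `keyring_file_check_path` error above ("Invalid permissions for key file ... 0100666") shows SPDK refusing a key file that is readable by group or other. A stand-in for that check — the exact mask SPDK applies is an assumption inferred from the 0600-accepted / 0666-rejected behavior in this log:

```python
import os
import stat
import tempfile

def key_file_permissions_ok(path: str) -> bool:
    # Reject key files granting any group/other access, mirroring the
    # 0600-passes / 0666-fails behavior in the log (mask is assumed).
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0

fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o600)
ok_0600 = key_file_permissions_ok(path)   # accepted

os.chmod(path, 0o666)
ok_0666 = key_file_permissions_ok(path)   # rejected, like the log's -1 error

os.unlink(path)
print(ok_0600, ok_0666)
```

This is why the earlier `chmod 0666 /tmp/tmp.8NZE8H98Z0` step turns every subsequent `keyring_file_add_key` RPC into the "Operation not permitted" JSON-RPC error seen here.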
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.944 [2024-12-09 17:29:27.391974] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:00.944 request: 00:19:00.944 { 00:19:00.944 "name": "TLSTEST", 00:19:00.944 "trtype": "tcp", 00:19:00.944 "traddr": "10.0.0.2", 00:19:00.944 "adrfam": "ipv4", 00:19:00.944 "trsvcid": "4420", 00:19:00.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:00.945 "prchk_reftag": false, 00:19:00.945 "prchk_guard": false, 00:19:00.945 "hdgst": false, 00:19:00.945 "ddgst": false, 00:19:00.945 "psk": "key0", 00:19:00.945 "allow_unrecognized_csi": false, 00:19:00.945 "method": "bdev_nvme_attach_controller", 00:19:00.945 "req_id": 1 00:19:00.945 } 00:19:00.945 Got JSON-RPC error response 00:19:00.945 response: 00:19:00.945 { 00:19:00.945 "code": -126, 00:19:00.945 "message": "Required key not available" 00:19:00.945 } 00:19:00.945 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1921878 00:19:00.945 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1921878 ']' 00:19:00.945 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1921878 00:19:00.945 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.945 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.945 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1921878 00:19:00.945 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:00.945 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:00.945 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1921878' 00:19:00.945 killing process with pid 1921878 00:19:00.945 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1921878 00:19:00.945 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.945 00:19:00.945 Latency(us) 00:19:00.945 [2024-12-09T16:29:27.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.945 [2024-12-09T16:29:27.485Z] =================================================================================================================== 00:19:00.945 [2024-12-09T16:29:27.485Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:00.945 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1921878 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1919830 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1919830 ']' 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1919830 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1919830 00:19:01.204 
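The `NOT run_bdevperf ...` trace with its `es=1` / `(( !es == 0 ))` bookkeeping is autotest_common.sh's expected-failure helper: the test step passes only if the wrapped command fails. A rough Python analogue of that inversion logic (function names here are illustrative, not SPDK's):

```python
def NOT(func, *args) -> bool:
    """Expected-failure wrapper: succeed only when func fails, mirroring
    the es=0/es=1 exit-status inversion in autotest_common.sh."""
    try:
        rc = func(*args)
    except Exception:
        return True           # an exception counts as failure, so NOT passes
    return rc != 0            # any nonzero exit status also counts as failure

def always_fails():
    return 1                  # stand-in for bdevperf failing on the 0666 key

def always_succeeds():
    return 0

print(NOT(always_fails), NOT(always_succeeds))
```

Here the wrapped `run_bdevperf` exits 1 because the controller attach fails with "Required key not available", so `NOT` records the step as a pass.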
17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1919830' 00:19:01.204 killing process with pid 1919830 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1919830 00:19:01.204 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1919830 00:19:01.463 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:01.463 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.463 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:01.463 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.463 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1922115 00:19:01.463 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1922115 00:19:01.463 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:01.463 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1922115 ']' 00:19:01.463 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.463 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.463 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:01.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.463 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.463 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.463 [2024-12-09 17:29:27.886637] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:19:01.463 [2024-12-09 17:29:27.886685] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.463 [2024-12-09 17:29:27.964757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.463 [2024-12-09 17:29:28.001360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.463 [2024-12-09 17:29:28.001392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.463 [2024-12-09 17:29:28.001399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.463 [2024-12-09 17:29:28.001406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.463 [2024-12-09 17:29:28.001411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.463 [2024-12-09 17:29:28.001901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.8NZE8H98Z0 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.8NZE8H98Z0 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.8NZE8H98Z0 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8NZE8H98Z0 00:19:01.722 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:01.981 [2024-12-09 17:29:28.316534] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.981 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:02.240 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:02.240 [2024-12-09 17:29:28.701536] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:02.240 [2024-12-09 17:29:28.701739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.240 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:02.498 malloc0 00:19:02.498 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:02.756 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8NZE8H98Z0 00:19:02.756 [2024-12-09 17:29:29.286897] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.8NZE8H98Z0': 0100666 00:19:02.756 [2024-12-09 17:29:29.286922] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:02.756 request: 00:19:02.756 { 00:19:02.756 "name": "key0", 00:19:02.756 "path": "/tmp/tmp.8NZE8H98Z0", 00:19:02.756 "method": "keyring_file_add_key", 00:19:02.756 "req_id": 1 
00:19:02.756 } 00:19:02.756 Got JSON-RPC error response 00:19:02.756 response: 00:19:02.756 { 00:19:02.756 "code": -1, 00:19:02.756 "message": "Operation not permitted" 00:19:02.756 } 00:19:03.015 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:03.015 [2024-12-09 17:29:29.483435] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:03.015 [2024-12-09 17:29:29.483464] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:03.015 request: 00:19:03.015 { 00:19:03.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.015 "host": "nqn.2016-06.io.spdk:host1", 00:19:03.015 "psk": "key0", 00:19:03.015 "method": "nvmf_subsystem_add_host", 00:19:03.015 "req_id": 1 00:19:03.015 } 00:19:03.015 Got JSON-RPC error response 00:19:03.015 response: 00:19:03.015 { 00:19:03.015 "code": -32603, 00:19:03.015 "message": "Internal error" 00:19:03.015 } 00:19:03.015 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:03.015 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.015 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.015 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.015 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1922115 00:19:03.015 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1922115 ']' 00:19:03.015 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1922115 00:19:03.015 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:03.015 17:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.015 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1922115 00:19:03.274 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:03.274 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:03.274 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1922115' 00:19:03.274 killing process with pid 1922115 00:19:03.274 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1922115 00:19:03.274 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1922115 00:19:03.274 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.8NZE8H98Z0 00:19:03.275 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:03.275 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:03.275 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.275 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.275 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1922381 00:19:03.275 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:03.275 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1922381 00:19:03.275 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1922381 ']' 00:19:03.275 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.275 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.275 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.275 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.275 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.275 [2024-12-09 17:29:29.787183] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:19:03.275 [2024-12-09 17:29:29.787233] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.534 [2024-12-09 17:29:29.865278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.534 [2024-12-09 17:29:29.902476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.534 [2024-12-09 17:29:29.902511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.534 [2024-12-09 17:29:29.902519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.534 [2024-12-09 17:29:29.902526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.534 [2024-12-09 17:29:29.902531] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:03.534 [2024-12-09 17:29:29.903024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.534 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.534 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:03.534 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:03.534 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:03.534 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.534 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.534 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.8NZE8H98Z0 00:19:03.534 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8NZE8H98Z0 00:19:03.534 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:03.793 [2024-12-09 17:29:30.215407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.793 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:04.052 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:04.052 [2024-12-09 17:29:30.584336] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:04.052 [2024-12-09 17:29:30.584538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:04.311 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:04.311 malloc0 00:19:04.311 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:04.569 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8NZE8H98Z0 00:19:04.828 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:04.828 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1922756 00:19:04.828 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:04.828 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.828 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1922756 /var/tmp/bdevperf.sock 00:19:04.828 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1922756 ']' 00:19:04.828 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.087 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.087 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:05.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.087 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.087 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.087 [2024-12-09 17:29:31.405045] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:19:05.087 [2024-12-09 17:29:31.405101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922756 ] 00:19:05.087 [2024-12-09 17:29:31.481255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.087 [2024-12-09 17:29:31.520271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.087 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.087 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:05.087 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8NZE8H98Z0 00:19:05.345 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:05.604 [2024-12-09 17:29:31.972020] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.604 TLSTESTn1 00:19:05.604 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:05.863 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:05.863 "subsystems": [ 00:19:05.863 { 00:19:05.863 "subsystem": "keyring", 00:19:05.863 "config": [ 00:19:05.863 { 00:19:05.863 "method": "keyring_file_add_key", 00:19:05.863 "params": { 00:19:05.863 "name": "key0", 00:19:05.863 "path": "/tmp/tmp.8NZE8H98Z0" 00:19:05.863 } 00:19:05.863 } 00:19:05.863 ] 00:19:05.863 }, 00:19:05.863 { 00:19:05.863 "subsystem": "iobuf", 00:19:05.863 "config": [ 00:19:05.863 { 00:19:05.863 "method": "iobuf_set_options", 00:19:05.863 "params": { 00:19:05.863 "small_pool_count": 8192, 00:19:05.863 "large_pool_count": 1024, 00:19:05.863 "small_bufsize": 8192, 00:19:05.863 "large_bufsize": 135168, 00:19:05.863 "enable_numa": false 00:19:05.863 } 00:19:05.863 } 00:19:05.863 ] 00:19:05.863 }, 00:19:05.863 { 00:19:05.863 "subsystem": "sock", 00:19:05.863 "config": [ 00:19:05.863 { 00:19:05.863 "method": "sock_set_default_impl", 00:19:05.863 "params": { 00:19:05.863 "impl_name": "posix" 00:19:05.863 } 00:19:05.863 }, 00:19:05.863 { 00:19:05.863 "method": "sock_impl_set_options", 00:19:05.863 "params": { 00:19:05.863 "impl_name": "ssl", 00:19:05.863 "recv_buf_size": 4096, 00:19:05.863 "send_buf_size": 4096, 00:19:05.863 "enable_recv_pipe": true, 00:19:05.863 "enable_quickack": false, 00:19:05.863 "enable_placement_id": 0, 00:19:05.863 "enable_zerocopy_send_server": true, 00:19:05.863 "enable_zerocopy_send_client": false, 00:19:05.863 "zerocopy_threshold": 0, 00:19:05.863 "tls_version": 0, 00:19:05.863 "enable_ktls": false 00:19:05.863 } 00:19:05.863 }, 00:19:05.863 { 00:19:05.863 "method": "sock_impl_set_options", 00:19:05.863 "params": { 00:19:05.863 "impl_name": "posix", 00:19:05.863 "recv_buf_size": 2097152, 00:19:05.863 "send_buf_size": 2097152, 00:19:05.863 "enable_recv_pipe": true, 00:19:05.863 "enable_quickack": false, 00:19:05.863 "enable_placement_id": 0, 
00:19:05.863 "enable_zerocopy_send_server": true, 00:19:05.863 "enable_zerocopy_send_client": false, 00:19:05.863 "zerocopy_threshold": 0, 00:19:05.863 "tls_version": 0, 00:19:05.863 "enable_ktls": false 00:19:05.863 } 00:19:05.863 } 00:19:05.863 ] 00:19:05.863 }, 00:19:05.863 { 00:19:05.863 "subsystem": "vmd", 00:19:05.863 "config": [] 00:19:05.863 }, 00:19:05.863 { 00:19:05.863 "subsystem": "accel", 00:19:05.863 "config": [ 00:19:05.863 { 00:19:05.863 "method": "accel_set_options", 00:19:05.863 "params": { 00:19:05.863 "small_cache_size": 128, 00:19:05.863 "large_cache_size": 16, 00:19:05.863 "task_count": 2048, 00:19:05.863 "sequence_count": 2048, 00:19:05.863 "buf_count": 2048 00:19:05.863 } 00:19:05.863 } 00:19:05.863 ] 00:19:05.863 }, 00:19:05.863 { 00:19:05.863 "subsystem": "bdev", 00:19:05.863 "config": [ 00:19:05.863 { 00:19:05.863 "method": "bdev_set_options", 00:19:05.863 "params": { 00:19:05.863 "bdev_io_pool_size": 65535, 00:19:05.863 "bdev_io_cache_size": 256, 00:19:05.863 "bdev_auto_examine": true, 00:19:05.863 "iobuf_small_cache_size": 128, 00:19:05.863 "iobuf_large_cache_size": 16 00:19:05.863 } 00:19:05.863 }, 00:19:05.863 { 00:19:05.863 "method": "bdev_raid_set_options", 00:19:05.863 "params": { 00:19:05.863 "process_window_size_kb": 1024, 00:19:05.863 "process_max_bandwidth_mb_sec": 0 00:19:05.863 } 00:19:05.863 }, 00:19:05.863 { 00:19:05.863 "method": "bdev_iscsi_set_options", 00:19:05.863 "params": { 00:19:05.863 "timeout_sec": 30 00:19:05.863 } 00:19:05.863 }, 00:19:05.863 { 00:19:05.863 "method": "bdev_nvme_set_options", 00:19:05.863 "params": { 00:19:05.863 "action_on_timeout": "none", 00:19:05.863 "timeout_us": 0, 00:19:05.863 "timeout_admin_us": 0, 00:19:05.863 "keep_alive_timeout_ms": 10000, 00:19:05.863 "arbitration_burst": 0, 00:19:05.863 "low_priority_weight": 0, 00:19:05.863 "medium_priority_weight": 0, 00:19:05.863 "high_priority_weight": 0, 00:19:05.863 "nvme_adminq_poll_period_us": 10000, 00:19:05.863 "nvme_ioq_poll_period_us": 0, 
00:19:05.863 "io_queue_requests": 0, 00:19:05.863 "delay_cmd_submit": true, 00:19:05.863 "transport_retry_count": 4, 00:19:05.863 "bdev_retry_count": 3, 00:19:05.863 "transport_ack_timeout": 0, 00:19:05.863 "ctrlr_loss_timeout_sec": 0, 00:19:05.863 "reconnect_delay_sec": 0, 00:19:05.863 "fast_io_fail_timeout_sec": 0, 00:19:05.863 "disable_auto_failback": false, 00:19:05.863 "generate_uuids": false, 00:19:05.863 "transport_tos": 0, 00:19:05.863 "nvme_error_stat": false, 00:19:05.863 "rdma_srq_size": 0, 00:19:05.863 "io_path_stat": false, 00:19:05.863 "allow_accel_sequence": false, 00:19:05.863 "rdma_max_cq_size": 0, 00:19:05.863 "rdma_cm_event_timeout_ms": 0, 00:19:05.863 "dhchap_digests": [ 00:19:05.863 "sha256", 00:19:05.863 "sha384", 00:19:05.863 "sha512" 00:19:05.863 ], 00:19:05.863 "dhchap_dhgroups": [ 00:19:05.863 "null", 00:19:05.863 "ffdhe2048", 00:19:05.863 "ffdhe3072", 00:19:05.863 "ffdhe4096", 00:19:05.863 "ffdhe6144", 00:19:05.863 "ffdhe8192" 00:19:05.864 ] 00:19:05.864 } 00:19:05.864 }, 00:19:05.864 { 00:19:05.864 "method": "bdev_nvme_set_hotplug", 00:19:05.864 "params": { 00:19:05.864 "period_us": 100000, 00:19:05.864 "enable": false 00:19:05.864 } 00:19:05.864 }, 00:19:05.864 { 00:19:05.864 "method": "bdev_malloc_create", 00:19:05.864 "params": { 00:19:05.864 "name": "malloc0", 00:19:05.864 "num_blocks": 8192, 00:19:05.864 "block_size": 4096, 00:19:05.864 "physical_block_size": 4096, 00:19:05.864 "uuid": "ab1fc974-4bd0-4435-8453-1a73832600d3", 00:19:05.864 "optimal_io_boundary": 0, 00:19:05.864 "md_size": 0, 00:19:05.864 "dif_type": 0, 00:19:05.864 "dif_is_head_of_md": false, 00:19:05.864 "dif_pi_format": 0 00:19:05.864 } 00:19:05.864 }, 00:19:05.864 { 00:19:05.864 "method": "bdev_wait_for_examine" 00:19:05.864 } 00:19:05.864 ] 00:19:05.864 }, 00:19:05.864 { 00:19:05.864 "subsystem": "nbd", 00:19:05.864 "config": [] 00:19:05.864 }, 00:19:05.864 { 00:19:05.864 "subsystem": "scheduler", 00:19:05.864 "config": [ 00:19:05.864 { 00:19:05.864 "method": 
"framework_set_scheduler", 00:19:05.864 "params": { 00:19:05.864 "name": "static" 00:19:05.864 } 00:19:05.864 } 00:19:05.864 ] 00:19:05.864 }, 00:19:05.864 { 00:19:05.864 "subsystem": "nvmf", 00:19:05.864 "config": [ 00:19:05.864 { 00:19:05.864 "method": "nvmf_set_config", 00:19:05.864 "params": { 00:19:05.864 "discovery_filter": "match_any", 00:19:05.864 "admin_cmd_passthru": { 00:19:05.864 "identify_ctrlr": false 00:19:05.864 }, 00:19:05.864 "dhchap_digests": [ 00:19:05.864 "sha256", 00:19:05.864 "sha384", 00:19:05.864 "sha512" 00:19:05.864 ], 00:19:05.864 "dhchap_dhgroups": [ 00:19:05.864 "null", 00:19:05.864 "ffdhe2048", 00:19:05.864 "ffdhe3072", 00:19:05.864 "ffdhe4096", 00:19:05.864 "ffdhe6144", 00:19:05.864 "ffdhe8192" 00:19:05.864 ] 00:19:05.864 } 00:19:05.864 }, 00:19:05.864 { 00:19:05.864 "method": "nvmf_set_max_subsystems", 00:19:05.864 "params": { 00:19:05.864 "max_subsystems": 1024 00:19:05.864 } 00:19:05.864 }, 00:19:05.864 { 00:19:05.864 "method": "nvmf_set_crdt", 00:19:05.864 "params": { 00:19:05.864 "crdt1": 0, 00:19:05.864 "crdt2": 0, 00:19:05.864 "crdt3": 0 00:19:05.864 } 00:19:05.864 }, 00:19:05.864 { 00:19:05.864 "method": "nvmf_create_transport", 00:19:05.864 "params": { 00:19:05.864 "trtype": "TCP", 00:19:05.864 "max_queue_depth": 128, 00:19:05.864 "max_io_qpairs_per_ctrlr": 127, 00:19:05.864 "in_capsule_data_size": 4096, 00:19:05.864 "max_io_size": 131072, 00:19:05.864 "io_unit_size": 131072, 00:19:05.864 "max_aq_depth": 128, 00:19:05.864 "num_shared_buffers": 511, 00:19:05.864 "buf_cache_size": 4294967295, 00:19:05.864 "dif_insert_or_strip": false, 00:19:05.864 "zcopy": false, 00:19:05.864 "c2h_success": false, 00:19:05.864 "sock_priority": 0, 00:19:05.864 "abort_timeout_sec": 1, 00:19:05.864 "ack_timeout": 0, 00:19:05.864 "data_wr_pool_size": 0 00:19:05.864 } 00:19:05.864 }, 00:19:05.864 { 00:19:05.864 "method": "nvmf_create_subsystem", 00:19:05.864 "params": { 00:19:05.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.864 
"allow_any_host": false, 00:19:05.864 "serial_number": "SPDK00000000000001", 00:19:05.864 "model_number": "SPDK bdev Controller", 00:19:05.864 "max_namespaces": 10, 00:19:05.864 "min_cntlid": 1, 00:19:05.864 "max_cntlid": 65519, 00:19:05.864 "ana_reporting": false 00:19:05.864 } 00:19:05.864 }, 00:19:05.864 { 00:19:05.864 "method": "nvmf_subsystem_add_host", 00:19:05.864 "params": { 00:19:05.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.864 "host": "nqn.2016-06.io.spdk:host1", 00:19:05.864 "psk": "key0" 00:19:05.864 } 00:19:05.864 }, 00:19:05.864 { 00:19:05.864 "method": "nvmf_subsystem_add_ns", 00:19:05.864 "params": { 00:19:05.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.864 "namespace": { 00:19:05.864 "nsid": 1, 00:19:05.864 "bdev_name": "malloc0", 00:19:05.864 "nguid": "AB1FC9744BD0443584531A73832600D3", 00:19:05.864 "uuid": "ab1fc974-4bd0-4435-8453-1a73832600d3", 00:19:05.864 "no_auto_visible": false 00:19:05.864 } 00:19:05.864 } 00:19:05.864 }, 00:19:05.864 { 00:19:05.864 "method": "nvmf_subsystem_add_listener", 00:19:05.864 "params": { 00:19:05.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.864 "listen_address": { 00:19:05.864 "trtype": "TCP", 00:19:05.864 "adrfam": "IPv4", 00:19:05.864 "traddr": "10.0.0.2", 00:19:05.864 "trsvcid": "4420" 00:19:05.864 }, 00:19:05.864 "secure_channel": true 00:19:05.864 } 00:19:05.864 } 00:19:05.864 ] 00:19:05.864 } 00:19:05.864 ] 00:19:05.864 }' 00:19:05.864 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:06.124 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:06.124 "subsystems": [ 00:19:06.124 { 00:19:06.124 "subsystem": "keyring", 00:19:06.124 "config": [ 00:19:06.124 { 00:19:06.124 "method": "keyring_file_add_key", 00:19:06.124 "params": { 00:19:06.124 "name": "key0", 00:19:06.124 "path": "/tmp/tmp.8NZE8H98Z0" 00:19:06.124 } 
00:19:06.124 } 00:19:06.124 ] 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "subsystem": "iobuf", 00:19:06.124 "config": [ 00:19:06.124 { 00:19:06.124 "method": "iobuf_set_options", 00:19:06.124 "params": { 00:19:06.124 "small_pool_count": 8192, 00:19:06.124 "large_pool_count": 1024, 00:19:06.124 "small_bufsize": 8192, 00:19:06.124 "large_bufsize": 135168, 00:19:06.124 "enable_numa": false 00:19:06.124 } 00:19:06.124 } 00:19:06.124 ] 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "subsystem": "sock", 00:19:06.124 "config": [ 00:19:06.124 { 00:19:06.124 "method": "sock_set_default_impl", 00:19:06.124 "params": { 00:19:06.124 "impl_name": "posix" 00:19:06.124 } 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "method": "sock_impl_set_options", 00:19:06.124 "params": { 00:19:06.124 "impl_name": "ssl", 00:19:06.124 "recv_buf_size": 4096, 00:19:06.124 "send_buf_size": 4096, 00:19:06.124 "enable_recv_pipe": true, 00:19:06.124 "enable_quickack": false, 00:19:06.124 "enable_placement_id": 0, 00:19:06.124 "enable_zerocopy_send_server": true, 00:19:06.124 "enable_zerocopy_send_client": false, 00:19:06.124 "zerocopy_threshold": 0, 00:19:06.124 "tls_version": 0, 00:19:06.124 "enable_ktls": false 00:19:06.124 } 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "method": "sock_impl_set_options", 00:19:06.124 "params": { 00:19:06.124 "impl_name": "posix", 00:19:06.124 "recv_buf_size": 2097152, 00:19:06.124 "send_buf_size": 2097152, 00:19:06.124 "enable_recv_pipe": true, 00:19:06.124 "enable_quickack": false, 00:19:06.124 "enable_placement_id": 0, 00:19:06.124 "enable_zerocopy_send_server": true, 00:19:06.124 "enable_zerocopy_send_client": false, 00:19:06.124 "zerocopy_threshold": 0, 00:19:06.124 "tls_version": 0, 00:19:06.124 "enable_ktls": false 00:19:06.124 } 00:19:06.124 } 00:19:06.124 ] 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "subsystem": "vmd", 00:19:06.124 "config": [] 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "subsystem": "accel", 00:19:06.124 "config": [ 00:19:06.124 { 00:19:06.124 
"method": "accel_set_options", 00:19:06.124 "params": { 00:19:06.124 "small_cache_size": 128, 00:19:06.124 "large_cache_size": 16, 00:19:06.124 "task_count": 2048, 00:19:06.124 "sequence_count": 2048, 00:19:06.124 "buf_count": 2048 00:19:06.124 } 00:19:06.124 } 00:19:06.124 ] 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "subsystem": "bdev", 00:19:06.124 "config": [ 00:19:06.124 { 00:19:06.124 "method": "bdev_set_options", 00:19:06.124 "params": { 00:19:06.124 "bdev_io_pool_size": 65535, 00:19:06.124 "bdev_io_cache_size": 256, 00:19:06.124 "bdev_auto_examine": true, 00:19:06.124 "iobuf_small_cache_size": 128, 00:19:06.124 "iobuf_large_cache_size": 16 00:19:06.124 } 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "method": "bdev_raid_set_options", 00:19:06.124 "params": { 00:19:06.124 "process_window_size_kb": 1024, 00:19:06.124 "process_max_bandwidth_mb_sec": 0 00:19:06.124 } 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "method": "bdev_iscsi_set_options", 00:19:06.124 "params": { 00:19:06.124 "timeout_sec": 30 00:19:06.124 } 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "method": "bdev_nvme_set_options", 00:19:06.124 "params": { 00:19:06.124 "action_on_timeout": "none", 00:19:06.124 "timeout_us": 0, 00:19:06.124 "timeout_admin_us": 0, 00:19:06.124 "keep_alive_timeout_ms": 10000, 00:19:06.124 "arbitration_burst": 0, 00:19:06.124 "low_priority_weight": 0, 00:19:06.124 "medium_priority_weight": 0, 00:19:06.124 "high_priority_weight": 0, 00:19:06.124 "nvme_adminq_poll_period_us": 10000, 00:19:06.124 "nvme_ioq_poll_period_us": 0, 00:19:06.124 "io_queue_requests": 512, 00:19:06.124 "delay_cmd_submit": true, 00:19:06.124 "transport_retry_count": 4, 00:19:06.124 "bdev_retry_count": 3, 00:19:06.124 "transport_ack_timeout": 0, 00:19:06.124 "ctrlr_loss_timeout_sec": 0, 00:19:06.124 "reconnect_delay_sec": 0, 00:19:06.124 "fast_io_fail_timeout_sec": 0, 00:19:06.124 "disable_auto_failback": false, 00:19:06.124 "generate_uuids": false, 00:19:06.124 "transport_tos": 0, 00:19:06.124 
"nvme_error_stat": false, 00:19:06.124 "rdma_srq_size": 0, 00:19:06.124 "io_path_stat": false, 00:19:06.124 "allow_accel_sequence": false, 00:19:06.124 "rdma_max_cq_size": 0, 00:19:06.124 "rdma_cm_event_timeout_ms": 0, 00:19:06.124 "dhchap_digests": [ 00:19:06.124 "sha256", 00:19:06.124 "sha384", 00:19:06.124 "sha512" 00:19:06.124 ], 00:19:06.124 "dhchap_dhgroups": [ 00:19:06.124 "null", 00:19:06.124 "ffdhe2048", 00:19:06.124 "ffdhe3072", 00:19:06.124 "ffdhe4096", 00:19:06.124 "ffdhe6144", 00:19:06.124 "ffdhe8192" 00:19:06.124 ] 00:19:06.124 } 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "method": "bdev_nvme_attach_controller", 00:19:06.124 "params": { 00:19:06.124 "name": "TLSTEST", 00:19:06.124 "trtype": "TCP", 00:19:06.124 "adrfam": "IPv4", 00:19:06.124 "traddr": "10.0.0.2", 00:19:06.124 "trsvcid": "4420", 00:19:06.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.124 "prchk_reftag": false, 00:19:06.124 "prchk_guard": false, 00:19:06.124 "ctrlr_loss_timeout_sec": 0, 00:19:06.124 "reconnect_delay_sec": 0, 00:19:06.124 "fast_io_fail_timeout_sec": 0, 00:19:06.124 "psk": "key0", 00:19:06.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:06.124 "hdgst": false, 00:19:06.124 "ddgst": false, 00:19:06.124 "multipath": "multipath" 00:19:06.124 } 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "method": "bdev_nvme_set_hotplug", 00:19:06.124 "params": { 00:19:06.124 "period_us": 100000, 00:19:06.124 "enable": false 00:19:06.124 } 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "method": "bdev_wait_for_examine" 00:19:06.124 } 00:19:06.124 ] 00:19:06.124 }, 00:19:06.124 { 00:19:06.124 "subsystem": "nbd", 00:19:06.124 "config": [] 00:19:06.124 } 00:19:06.124 ] 00:19:06.124 }' 00:19:06.124 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1922756 00:19:06.124 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1922756 ']' 00:19:06.124 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 1922756 00:19:06.124 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:06.124 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.124 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1922756 00:19:06.124 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:06.124 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:06.124 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1922756' 00:19:06.124 killing process with pid 1922756 00:19:06.124 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1922756 00:19:06.124 Received shutdown signal, test time was about 10.000000 seconds 00:19:06.124 00:19:06.124 Latency(us) 00:19:06.124 [2024-12-09T16:29:32.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.125 [2024-12-09T16:29:32.665Z] =================================================================================================================== 00:19:06.125 [2024-12-09T16:29:32.665Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:06.125 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1922756 00:19:06.383 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1922381 00:19:06.383 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1922381 ']' 00:19:06.383 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1922381 00:19:06.383 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:06.383 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.383 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1922381 00:19:06.383 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:06.383 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:06.383 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1922381' 00:19:06.383 killing process with pid 1922381 00:19:06.383 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1922381 00:19:06.383 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1922381 00:19:06.641 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:06.641 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.641 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.641 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.641 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:06.641 "subsystems": [ 00:19:06.641 { 00:19:06.641 "subsystem": "keyring", 00:19:06.641 "config": [ 00:19:06.641 { 00:19:06.641 "method": "keyring_file_add_key", 00:19:06.641 "params": { 00:19:06.641 "name": "key0", 00:19:06.641 "path": "/tmp/tmp.8NZE8H98Z0" 00:19:06.641 } 00:19:06.641 } 00:19:06.641 ] 00:19:06.641 }, 00:19:06.641 { 00:19:06.641 "subsystem": "iobuf", 00:19:06.641 "config": [ 00:19:06.641 { 00:19:06.641 "method": "iobuf_set_options", 00:19:06.641 "params": { 00:19:06.641 "small_pool_count": 8192, 00:19:06.641 "large_pool_count": 1024, 00:19:06.641 "small_bufsize": 8192, 00:19:06.641 "large_bufsize": 135168, 
00:19:06.641 "enable_numa": false 00:19:06.641 } 00:19:06.641 } 00:19:06.641 ] 00:19:06.641 }, 00:19:06.641 { 00:19:06.641 "subsystem": "sock", 00:19:06.641 "config": [ 00:19:06.641 { 00:19:06.641 "method": "sock_set_default_impl", 00:19:06.641 "params": { 00:19:06.641 "impl_name": "posix" 00:19:06.641 } 00:19:06.641 }, 00:19:06.641 { 00:19:06.641 "method": "sock_impl_set_options", 00:19:06.641 "params": { 00:19:06.641 "impl_name": "ssl", 00:19:06.641 "recv_buf_size": 4096, 00:19:06.641 "send_buf_size": 4096, 00:19:06.641 "enable_recv_pipe": true, 00:19:06.641 "enable_quickack": false, 00:19:06.641 "enable_placement_id": 0, 00:19:06.641 "enable_zerocopy_send_server": true, 00:19:06.641 "enable_zerocopy_send_client": false, 00:19:06.641 "zerocopy_threshold": 0, 00:19:06.641 "tls_version": 0, 00:19:06.641 "enable_ktls": false 00:19:06.641 } 00:19:06.641 }, 00:19:06.641 { 00:19:06.641 "method": "sock_impl_set_options", 00:19:06.641 "params": { 00:19:06.641 "impl_name": "posix", 00:19:06.641 "recv_buf_size": 2097152, 00:19:06.641 "send_buf_size": 2097152, 00:19:06.641 "enable_recv_pipe": true, 00:19:06.641 "enable_quickack": false, 00:19:06.641 "enable_placement_id": 0, 00:19:06.641 "enable_zerocopy_send_server": true, 00:19:06.641 "enable_zerocopy_send_client": false, 00:19:06.641 "zerocopy_threshold": 0, 00:19:06.641 "tls_version": 0, 00:19:06.641 "enable_ktls": false 00:19:06.641 } 00:19:06.641 } 00:19:06.641 ] 00:19:06.641 }, 00:19:06.641 { 00:19:06.641 "subsystem": "vmd", 00:19:06.641 "config": [] 00:19:06.641 }, 00:19:06.641 { 00:19:06.641 "subsystem": "accel", 00:19:06.641 "config": [ 00:19:06.641 { 00:19:06.641 "method": "accel_set_options", 00:19:06.641 "params": { 00:19:06.641 "small_cache_size": 128, 00:19:06.641 "large_cache_size": 16, 00:19:06.641 "task_count": 2048, 00:19:06.641 "sequence_count": 2048, 00:19:06.641 "buf_count": 2048 00:19:06.641 } 00:19:06.641 } 00:19:06.641 ] 00:19:06.641 }, 00:19:06.641 { 00:19:06.641 "subsystem": "bdev", 00:19:06.641 
"config": [ 00:19:06.641 { 00:19:06.641 "method": "bdev_set_options", 00:19:06.641 "params": { 00:19:06.641 "bdev_io_pool_size": 65535, 00:19:06.641 "bdev_io_cache_size": 256, 00:19:06.641 "bdev_auto_examine": true, 00:19:06.641 "iobuf_small_cache_size": 128, 00:19:06.641 "iobuf_large_cache_size": 16 00:19:06.641 } 00:19:06.641 }, 00:19:06.641 { 00:19:06.641 "method": "bdev_raid_set_options", 00:19:06.641 "params": { 00:19:06.641 "process_window_size_kb": 1024, 00:19:06.641 "process_max_bandwidth_mb_sec": 0 00:19:06.641 } 00:19:06.641 }, 00:19:06.641 { 00:19:06.641 "method": "bdev_iscsi_set_options", 00:19:06.641 "params": { 00:19:06.641 "timeout_sec": 30 00:19:06.641 } 00:19:06.642 }, 00:19:06.642 { 00:19:06.642 "method": "bdev_nvme_set_options", 00:19:06.642 "params": { 00:19:06.642 "action_on_timeout": "none", 00:19:06.642 "timeout_us": 0, 00:19:06.642 "timeout_admin_us": 0, 00:19:06.642 "keep_alive_timeout_ms": 10000, 00:19:06.642 "arbitration_burst": 0, 00:19:06.642 "low_priority_weight": 0, 00:19:06.642 "medium_priority_weight": 0, 00:19:06.642 "high_priority_weight": 0, 00:19:06.642 "nvme_adminq_poll_period_us": 10000, 00:19:06.642 "nvme_ioq_poll_period_us": 0, 00:19:06.642 "io_queue_requests": 0, 00:19:06.642 "delay_cmd_submit": true, 00:19:06.642 "transport_retry_count": 4, 00:19:06.642 "bdev_retry_count": 3, 00:19:06.642 "transport_ack_timeout": 0, 00:19:06.642 "ctrlr_loss_timeout_sec": 0, 00:19:06.642 "reconnect_delay_sec": 0, 00:19:06.642 "fast_io_fail_timeout_sec": 0, 00:19:06.642 "disable_auto_failback": false, 00:19:06.642 "generate_uuids": false, 00:19:06.642 "transport_tos": 0, 00:19:06.642 "nvme_error_stat": false, 00:19:06.642 "rdma_srq_size": 0, 00:19:06.642 "io_path_stat": false, 00:19:06.642 "allow_accel_sequence": false, 00:19:06.642 "rdma_max_cq_size": 0, 00:19:06.642 "rdma_cm_event_timeout_ms": 0, 00:19:06.642 "dhchap_digests": [ 00:19:06.642 "sha256", 00:19:06.642 "sha384", 00:19:06.642 "sha512" 00:19:06.642 ], 00:19:06.642 
"dhchap_dhgroups": [ 00:19:06.642 "null", 00:19:06.642 "ffdhe2048", 00:19:06.642 "ffdhe3072", 00:19:06.642 "ffdhe4096", 00:19:06.642 "ffdhe6144", 00:19:06.642 "ffdhe8192" 00:19:06.642 ] 00:19:06.642 } 00:19:06.642 }, 00:19:06.642 { 00:19:06.642 "method": "bdev_nvme_set_hotplug", 00:19:06.642 "params": { 00:19:06.642 "period_us": 100000, 00:19:06.642 "enable": false 00:19:06.642 } 00:19:06.642 }, 00:19:06.642 { 00:19:06.642 "method": "bdev_malloc_create", 00:19:06.642 "params": { 00:19:06.642 "name": "malloc0", 00:19:06.642 "num_blocks": 8192, 00:19:06.642 "block_size": 4096, 00:19:06.642 "physical_block_size": 4096, 00:19:06.642 "uuid": "ab1fc974-4bd0-4435-8453-1a73832600d3", 00:19:06.642 "optimal_io_boundary": 0, 00:19:06.642 "md_size": 0, 00:19:06.642 "dif_type": 0, 00:19:06.642 "dif_is_head_of_md": false, 00:19:06.642 "dif_pi_format": 0 00:19:06.642 } 00:19:06.642 }, 00:19:06.642 { 00:19:06.642 "method": "bdev_wait_for_examine" 00:19:06.642 } 00:19:06.642 ] 00:19:06.642 }, 00:19:06.642 { 00:19:06.642 "subsystem": "nbd", 00:19:06.642 "config": [] 00:19:06.642 }, 00:19:06.642 { 00:19:06.642 "subsystem": "scheduler", 00:19:06.642 "config": [ 00:19:06.642 { 00:19:06.642 "method": "framework_set_scheduler", 00:19:06.642 "params": { 00:19:06.642 "name": "static" 00:19:06.642 } 00:19:06.642 } 00:19:06.642 ] 00:19:06.642 }, 00:19:06.642 { 00:19:06.642 "subsystem": "nvmf", 00:19:06.642 "config": [ 00:19:06.642 { 00:19:06.642 "method": "nvmf_set_config", 00:19:06.642 "params": { 00:19:06.642 "discovery_filter": "match_any", 00:19:06.642 "admin_cmd_passthru": { 00:19:06.642 "identify_ctrlr": false 00:19:06.642 }, 00:19:06.642 "dhchap_digests": [ 00:19:06.642 "sha256", 00:19:06.642 "sha384", 00:19:06.642 "sha512" 00:19:06.642 ], 00:19:06.642 "dhchap_dhgroups": [ 00:19:06.642 "null", 00:19:06.642 "ffdhe2048", 00:19:06.642 "ffdhe3072", 00:19:06.642 "ffdhe4096", 00:19:06.642 "ffdhe6144", 00:19:06.642 "ffdhe8192" 00:19:06.642 ] 00:19:06.642 } 00:19:06.642 }, 00:19:06.642 { 
00:19:06.642 "method": "nvmf_set_max_subsystems", 00:19:06.642 "params": { 00:19:06.642 "max_subsystems": 1024 00:19:06.642 } 00:19:06.642 }, 00:19:06.642 { 00:19:06.642 "method": "nvmf_set_crdt", 00:19:06.642 "params": { 00:19:06.642 "crdt1": 0, 00:19:06.642 "crdt2": 0, 00:19:06.642 "crdt3": 0 00:19:06.642 } 00:19:06.642 }, 00:19:06.642 { 00:19:06.642 "method": "nvmf_create_transport", 00:19:06.642 "params": { 00:19:06.642 "trtype": "TCP", 00:19:06.642 "max_queue_depth": 128, 00:19:06.642 "max_io_qpairs_per_ctrlr": 127, 00:19:06.642 "in_capsule_data_size": 4096, 00:19:06.642 "max_io_size": 131072, 00:19:06.642 "io_unit_size": 131072, 00:19:06.642 "max_aq_depth": 128, 00:19:06.642 "num_shared_buffers": 511, 00:19:06.642 "buf_cache_size": 4294967295, 00:19:06.642 "dif_insert_or_strip": false, 00:19:06.642 "zcopy": false, 00:19:06.642 "c2h_success": false, 00:19:06.642 "sock_priority": 0, 00:19:06.642 "abort_timeout_sec": 1, 00:19:06.642 "ack_timeout": 0, 00:19:06.642 "data_wr_pool_size": 0 00:19:06.642 } 00:19:06.642 }, 00:19:06.642 { 00:19:06.642 "method": "nvmf_create_subsystem", 00:19:06.642 "params": { 00:19:06.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.642 "allow_any_host": false, 00:19:06.642 "serial_number": "SPDK00000000000001", 00:19:06.642 "model_number": "SPDK bdev Controller", 00:19:06.642 "max_namespaces": 10, 00:19:06.642 "min_cntlid": 1, 00:19:06.642 "max_cntlid": 65519, 00:19:06.642 "ana_reporting": false 00:19:06.642 } 00:19:06.642 }, 00:19:06.642 { 00:19:06.642 "method": "nvmf_subsystem_add_host", 00:19:06.642 "params": { 00:19:06.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.642 "host": "nqn.2016-06.io.spdk:host1", 00:19:06.642 "psk": "key0" 00:19:06.642 } 00:19:06.642 }, 00:19:06.642 { 00:19:06.642 "method": "nvmf_subsystem_add_ns", 00:19:06.642 "params": { 00:19:06.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.642 "namespace": { 00:19:06.642 "nsid": 1, 00:19:06.642 "bdev_name": "malloc0", 00:19:06.642 "nguid": 
"AB1FC9744BD0443584531A73832600D3", 00:19:06.642 "uuid": "ab1fc974-4bd0-4435-8453-1a73832600d3", 00:19:06.642 "no_auto_visible": false 00:19:06.642 } 00:19:06.642 } 00:19:06.642 }, 00:19:06.642 { 00:19:06.642 "method": "nvmf_subsystem_add_listener", 00:19:06.642 "params": { 00:19:06.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.642 "listen_address": { 00:19:06.642 "trtype": "TCP", 00:19:06.642 "adrfam": "IPv4", 00:19:06.642 "traddr": "10.0.0.2", 00:19:06.642 "trsvcid": "4420" 00:19:06.642 }, 00:19:06.642 "secure_channel": true 00:19:06.642 } 00:19:06.642 } 00:19:06.642 ] 00:19:06.642 } 00:19:06.642 ] 00:19:06.642 }' 00:19:06.642 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1923076 00:19:06.642 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:06.642 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1923076 00:19:06.642 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1923076 ']' 00:19:06.642 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.642 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.642 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
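In the `bdev_malloc_create` / `nvmf_subsystem_add_ns` parameters dumped above, the namespace `nguid` is simply the bdev `uuid` with the hyphens stripped and the hex digits uppercased. A minimal check, with both values copied from this log:

```python
# UUID assigned to malloc0 in the bdev_malloc_create params above
uuid_str = "ab1fc974-4bd0-4435-8453-1a73832600d3"

# NGUID is the same 16 bytes rendered as 32 uppercase hex digits
nguid = uuid_str.replace("-", "").upper()
print(nguid)  # AB1FC9744BD0443584531A73832600D3
```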
00:19:06.642 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.642 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.642 [2024-12-09 17:29:33.060474] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:19:06.642 [2024-12-09 17:29:33.060519] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.642 [2024-12-09 17:29:33.136295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.642 [2024-12-09 17:29:33.175368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.642 [2024-12-09 17:29:33.175404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.642 [2024-12-09 17:29:33.175410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.642 [2024-12-09 17:29:33.175416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.642 [2024-12-09 17:29:33.175421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:06.642 [2024-12-09 17:29:33.175905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.901 [2024-12-09 17:29:33.387119] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.901 [2024-12-09 17:29:33.419149] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:06.901 [2024-12-09 17:29:33.419335] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1923127 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1923127 /var/tmp/bdevperf.sock 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1923127 ']' 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:07.467 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:07.467 "subsystems": [ 00:19:07.467 { 00:19:07.468 "subsystem": "keyring", 00:19:07.468 "config": [ 00:19:07.468 { 00:19:07.468 "method": "keyring_file_add_key", 00:19:07.468 "params": { 00:19:07.468 "name": "key0", 00:19:07.468 "path": "/tmp/tmp.8NZE8H98Z0" 00:19:07.468 } 00:19:07.468 } 00:19:07.468 ] 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 "subsystem": "iobuf", 00:19:07.468 "config": [ 00:19:07.468 { 00:19:07.468 "method": "iobuf_set_options", 00:19:07.468 "params": { 00:19:07.468 "small_pool_count": 8192, 00:19:07.468 "large_pool_count": 1024, 00:19:07.468 "small_bufsize": 8192, 00:19:07.468 "large_bufsize": 135168, 00:19:07.468 "enable_numa": false 00:19:07.468 } 00:19:07.468 } 00:19:07.468 ] 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 "subsystem": "sock", 00:19:07.468 "config": [ 00:19:07.468 { 00:19:07.468 "method": "sock_set_default_impl", 00:19:07.468 "params": { 00:19:07.468 "impl_name": "posix" 00:19:07.468 } 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 "method": "sock_impl_set_options", 00:19:07.468 "params": { 00:19:07.468 "impl_name": "ssl", 00:19:07.468 "recv_buf_size": 4096, 00:19:07.468 "send_buf_size": 4096, 00:19:07.468 "enable_recv_pipe": true, 00:19:07.468 "enable_quickack": false, 00:19:07.468 "enable_placement_id": 0, 00:19:07.468 "enable_zerocopy_send_server": true, 00:19:07.468 "enable_zerocopy_send_client": false, 00:19:07.468 "zerocopy_threshold": 0, 00:19:07.468 "tls_version": 0, 00:19:07.468 "enable_ktls": false 00:19:07.468 } 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 "method": "sock_impl_set_options", 00:19:07.468 "params": { 
00:19:07.468 "impl_name": "posix", 00:19:07.468 "recv_buf_size": 2097152, 00:19:07.468 "send_buf_size": 2097152, 00:19:07.468 "enable_recv_pipe": true, 00:19:07.468 "enable_quickack": false, 00:19:07.468 "enable_placement_id": 0, 00:19:07.468 "enable_zerocopy_send_server": true, 00:19:07.468 "enable_zerocopy_send_client": false, 00:19:07.468 "zerocopy_threshold": 0, 00:19:07.468 "tls_version": 0, 00:19:07.468 "enable_ktls": false 00:19:07.468 } 00:19:07.468 } 00:19:07.468 ] 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 "subsystem": "vmd", 00:19:07.468 "config": [] 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 "subsystem": "accel", 00:19:07.468 "config": [ 00:19:07.468 { 00:19:07.468 "method": "accel_set_options", 00:19:07.468 "params": { 00:19:07.468 "small_cache_size": 128, 00:19:07.468 "large_cache_size": 16, 00:19:07.468 "task_count": 2048, 00:19:07.468 "sequence_count": 2048, 00:19:07.468 "buf_count": 2048 00:19:07.468 } 00:19:07.468 } 00:19:07.468 ] 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 "subsystem": "bdev", 00:19:07.468 "config": [ 00:19:07.468 { 00:19:07.468 "method": "bdev_set_options", 00:19:07.468 "params": { 00:19:07.468 "bdev_io_pool_size": 65535, 00:19:07.468 "bdev_io_cache_size": 256, 00:19:07.468 "bdev_auto_examine": true, 00:19:07.468 "iobuf_small_cache_size": 128, 00:19:07.468 "iobuf_large_cache_size": 16 00:19:07.468 } 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 "method": "bdev_raid_set_options", 00:19:07.468 "params": { 00:19:07.468 "process_window_size_kb": 1024, 00:19:07.468 "process_max_bandwidth_mb_sec": 0 00:19:07.468 } 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 "method": "bdev_iscsi_set_options", 00:19:07.468 "params": { 00:19:07.468 "timeout_sec": 30 00:19:07.468 } 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 "method": "bdev_nvme_set_options", 00:19:07.468 "params": { 00:19:07.468 "action_on_timeout": "none", 00:19:07.468 "timeout_us": 0, 00:19:07.468 "timeout_admin_us": 0, 00:19:07.468 "keep_alive_timeout_ms": 10000, 00:19:07.468 
"arbitration_burst": 0, 00:19:07.468 "low_priority_weight": 0, 00:19:07.468 "medium_priority_weight": 0, 00:19:07.468 "high_priority_weight": 0, 00:19:07.468 "nvme_adminq_poll_period_us": 10000, 00:19:07.468 "nvme_ioq_poll_period_us": 0, 00:19:07.468 "io_queue_requests": 512, 00:19:07.468 "delay_cmd_submit": true, 00:19:07.468 "transport_retry_count": 4, 00:19:07.468 "bdev_retry_count": 3, 00:19:07.468 "transport_ack_timeout": 0, 00:19:07.468 "ctrlr_loss_timeout_sec": 0, 00:19:07.468 "reconnect_delay_sec": 0, 00:19:07.468 "fast_io_fail_timeout_sec": 0, 00:19:07.468 "disable_auto_failback": false, 00:19:07.468 "generate_uuids": false, 00:19:07.468 "transport_tos": 0, 00:19:07.468 "nvme_error_stat": false, 00:19:07.468 "rdma_srq_size": 0, 00:19:07.468 "io_path_stat": false, 00:19:07.468 "allow_accel_sequence": false, 00:19:07.468 "rdma_max_cq_size": 0, 00:19:07.468 "rdma_cm_event_timeout_ms": 0, 00:19:07.468 "dhchap_digests": [ 00:19:07.468 "sha256", 00:19:07.468 "sha384", 00:19:07.468 "sha512" 00:19:07.468 ], 00:19:07.468 "dhchap_dhgroups": [ 00:19:07.468 "null", 00:19:07.468 "ffdhe2048", 00:19:07.468 "ffdhe3072", 00:19:07.468 "ffdhe4096", 00:19:07.468 "ffdhe6144", 00:19:07.468 "ffdhe8192" 00:19:07.468 ] 00:19:07.468 } 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 "method": "bdev_nvme_attach_controller", 00:19:07.468 "params": { 00:19:07.468 "name": "TLSTEST", 00:19:07.468 "trtype": "TCP", 00:19:07.468 "adrfam": "IPv4", 00:19:07.468 "traddr": "10.0.0.2", 00:19:07.468 "trsvcid": "4420", 00:19:07.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.468 "prchk_reftag": false, 00:19:07.468 "prchk_guard": false, 00:19:07.468 "ctrlr_loss_timeout_sec": 0, 00:19:07.468 "reconnect_delay_sec": 0, 00:19:07.468 "fast_io_fail_timeout_sec": 0, 00:19:07.468 "psk": "key0", 00:19:07.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.468 "hdgst": false, 00:19:07.468 "ddgst": false, 00:19:07.468 "multipath": "multipath" 00:19:07.468 } 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 
"method": "bdev_nvme_set_hotplug", 00:19:07.468 "params": { 00:19:07.468 "period_us": 100000, 00:19:07.468 "enable": false 00:19:07.468 } 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 "method": "bdev_wait_for_examine" 00:19:07.468 } 00:19:07.468 ] 00:19:07.468 }, 00:19:07.468 { 00:19:07.468 "subsystem": "nbd", 00:19:07.468 "config": [] 00:19:07.468 } 00:19:07.468 ] 00:19:07.468 }' 00:19:07.468 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.468 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.468 [2024-12-09 17:29:33.962248] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:19:07.468 [2024-12-09 17:29:33.962297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923127 ] 00:19:07.727 [2024-12-09 17:29:34.018635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.727 [2024-12-09 17:29:34.060095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.727 [2024-12-09 17:29:34.213961] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:08.294 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.294 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:08.294 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:08.553 Running I/O for 10 seconds... 
00:19:10.425 5409.00 IOPS, 21.13 MiB/s [2024-12-09T16:29:37.901Z] 5486.50 IOPS, 21.43 MiB/s [2024-12-09T16:29:39.279Z] 5521.33 IOPS, 21.57 MiB/s [2024-12-09T16:29:40.215Z] 5551.50 IOPS, 21.69 MiB/s [2024-12-09T16:29:41.151Z] 5569.00 IOPS, 21.75 MiB/s [2024-12-09T16:29:42.088Z] 5569.00 IOPS, 21.75 MiB/s [2024-12-09T16:29:43.024Z] 5569.00 IOPS, 21.75 MiB/s [2024-12-09T16:29:43.961Z] 5570.62 IOPS, 21.76 MiB/s [2024-12-09T16:29:45.338Z] 5576.22 IOPS, 21.78 MiB/s [2024-12-09T16:29:45.338Z] 5583.90 IOPS, 21.81 MiB/s 00:19:18.798 Latency(us) 00:19:18.798 [2024-12-09T16:29:45.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.798 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:18.798 Verification LBA range: start 0x0 length 0x2000 00:19:18.798 TLSTESTn1 : 10.02 5586.73 21.82 0.00 0.00 22875.30 5305.30 30583.47 00:19:18.798 [2024-12-09T16:29:45.338Z] =================================================================================================================== 00:19:18.798 [2024-12-09T16:29:45.338Z] Total : 5586.73 21.82 0.00 0.00 22875.30 5305.30 30583.47 00:19:18.798 { 00:19:18.798 "results": [ 00:19:18.798 { 00:19:18.798 "job": "TLSTESTn1", 00:19:18.798 "core_mask": "0x4", 00:19:18.798 "workload": "verify", 00:19:18.798 "status": "finished", 00:19:18.798 "verify_range": { 00:19:18.798 "start": 0, 00:19:18.798 "length": 8192 00:19:18.798 }, 00:19:18.798 "queue_depth": 128, 00:19:18.798 "io_size": 4096, 00:19:18.798 "runtime": 10.017302, 00:19:18.798 "iops": 5586.733833121933, 00:19:18.798 "mibps": 21.82317903563255, 00:19:18.798 "io_failed": 0, 00:19:18.798 "io_timeout": 0, 00:19:18.798 "avg_latency_us": 22875.301308119848, 00:19:18.798 "min_latency_us": 5305.295238095238, 00:19:18.798 "max_latency_us": 30583.466666666667 00:19:18.798 } 00:19:18.798 ], 00:19:18.798 "core_count": 1 00:19:18.798 } 00:19:18.798 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:18.798 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1923127 00:19:18.798 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1923127 ']' 00:19:18.798 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1923127 00:19:18.798 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:18.798 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.798 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1923127 00:19:18.798 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:18.798 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:18.798 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1923127' 00:19:18.798 killing process with pid 1923127 00:19:18.799 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1923127 00:19:18.799 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.799 00:19:18.799 Latency(us) 00:19:18.799 [2024-12-09T16:29:45.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.799 [2024-12-09T16:29:45.339Z] =================================================================================================================== 00:19:18.799 [2024-12-09T16:29:45.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.799 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1923127 00:19:18.799 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1923076 00:19:18.799 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1923076 ']' 00:19:18.799 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1923076 00:19:18.799 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:18.799 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.799 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1923076 00:19:18.799 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:18.799 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:18.799 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1923076' 00:19:18.799 killing process with pid 1923076 00:19:18.799 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1923076 00:19:18.799 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1923076 00:19:19.057 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:19.057 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:19.057 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.057 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.057 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1925107 00:19:19.057 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:19.057 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1925107 00:19:19.057 
17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1925107 ']' 00:19:19.057 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.057 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.057 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.057 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.057 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.057 [2024-12-09 17:29:45.447046] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:19:19.057 [2024-12-09 17:29:45.447092] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.057 [2024-12-09 17:29:45.521311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.057 [2024-12-09 17:29:45.560057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.057 [2024-12-09 17:29:45.560091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.057 [2024-12-09 17:29:45.560101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.057 [2024-12-09 17:29:45.560107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:19.057 [2024-12-09 17:29:45.560111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:19.057 [2024-12-09 17:29:45.560596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.316 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.316 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:19.316 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:19.316 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:19.316 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.316 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.316 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.8NZE8H98Z0 00:19:19.316 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8NZE8H98Z0 00:19:19.316 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:19.575 [2024-12-09 17:29:45.857238] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.575 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:19.575 17:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:19.834 [2024-12-09 17:29:46.250246] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:19:19.834 [2024-12-09 17:29:46.250441] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.834 17:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:20.092 malloc0 00:19:20.092 17:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:20.351 17:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8NZE8H98Z0 00:19:20.610 17:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.610 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:20.610 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1925363 00:19:20.610 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.610 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1925363 /var/tmp/bdevperf.sock 00:19:20.610 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1925363 ']' 00:19:20.610 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.610 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.610 
17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.610 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.610 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.610 [2024-12-09 17:29:47.131142] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:19:20.610 [2024-12-09 17:29:47.131208] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1925363 ] 00:19:20.868 [2024-12-09 17:29:47.206625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.868 [2024-12-09 17:29:47.245776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.868 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.868 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:20.868 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8NZE8H98Z0 00:19:21.127 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:21.386 [2024-12-09 17:29:47.690137] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:19:21.386 nvme0n1 00:19:21.386 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:21.386 Running I/O for 1 seconds... 00:19:22.763 5403.00 IOPS, 21.11 MiB/s 00:19:22.763 Latency(us) 00:19:22.763 [2024-12-09T16:29:49.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.763 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:22.763 Verification LBA range: start 0x0 length 0x2000 00:19:22.763 nvme0n1 : 1.02 5439.04 21.25 0.00 0.00 23349.25 4837.18 21845.33 00:19:22.763 [2024-12-09T16:29:49.303Z] =================================================================================================================== 00:19:22.763 [2024-12-09T16:29:49.303Z] Total : 5439.04 21.25 0.00 0.00 23349.25 4837.18 21845.33 00:19:22.763 { 00:19:22.763 "results": [ 00:19:22.763 { 00:19:22.763 "job": "nvme0n1", 00:19:22.763 "core_mask": "0x2", 00:19:22.763 "workload": "verify", 00:19:22.763 "status": "finished", 00:19:22.763 "verify_range": { 00:19:22.763 "start": 0, 00:19:22.763 "length": 8192 00:19:22.763 }, 00:19:22.763 "queue_depth": 128, 00:19:22.763 "io_size": 4096, 00:19:22.763 "runtime": 1.016908, 00:19:22.763 "iops": 5439.0367663544785, 00:19:22.763 "mibps": 21.24623736857218, 00:19:22.763 "io_failed": 0, 00:19:22.763 "io_timeout": 0, 00:19:22.763 "avg_latency_us": 23349.25450628923, 00:19:22.763 "min_latency_us": 4837.1809523809525, 00:19:22.763 "max_latency_us": 21845.333333333332 00:19:22.763 } 00:19:22.763 ], 00:19:22.763 "core_count": 1 00:19:22.763 } 00:19:22.763 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1925363 00:19:22.763 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1925363 ']' 00:19:22.763 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1925363 00:19:22.763 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:22.763 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.763 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1925363 00:19:22.763 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:22.763 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:22.763 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1925363' 00:19:22.763 killing process with pid 1925363 00:19:22.763 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1925363 00:19:22.763 Received shutdown signal, test time was about 1.000000 seconds 00:19:22.763 00:19:22.763 Latency(us) 00:19:22.763 [2024-12-09T16:29:49.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.763 [2024-12-09T16:29:49.303Z] =================================================================================================================== 00:19:22.764 [2024-12-09T16:29:49.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:22.764 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1925363 00:19:22.764 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1925107 00:19:22.764 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1925107 ']' 00:19:22.764 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1925107 00:19:22.764 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:22.764 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.764 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1925107 00:19:22.764 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.764 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.764 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1925107' 00:19:22.764 killing process with pid 1925107 00:19:22.764 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1925107 00:19:22.764 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1925107 00:19:23.023 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:23.023 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:23.023 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.023 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.023 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1925702 00:19:23.023 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:23.023 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1925702 00:19:23.023 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1925702 ']' 00:19:23.023 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.023 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:19:23.023 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.023 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.023 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.023 [2024-12-09 17:29:49.400624] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:19:23.023 [2024-12-09 17:29:49.400670] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.023 [2024-12-09 17:29:49.478156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.023 [2024-12-09 17:29:49.514766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.023 [2024-12-09 17:29:49.514800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.023 [2024-12-09 17:29:49.514810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.023 [2024-12-09 17:29:49.514815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.023 [2024-12-09 17:29:49.514821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:23.023 [2024-12-09 17:29:49.515313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.282 [2024-12-09 17:29:49.659332] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.282 malloc0 00:19:23.282 [2024-12-09 17:29:49.687413] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:23.282 [2024-12-09 17:29:49.687612] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1925838 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1925838 /var/tmp/bdevperf.sock 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1925838 ']' 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.282 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.282 [2024-12-09 17:29:49.762662] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:19:23.282 [2024-12-09 17:29:49.762704] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1925838 ] 00:19:23.540 [2024-12-09 17:29:49.836091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.540 [2024-12-09 17:29:49.876511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.540 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.540 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:23.540 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8NZE8H98Z0 00:19:23.799 17:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:23.799 [2024-12-09 17:29:50.336223] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.058 nvme0n1 00:19:24.058 17:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:24.058 Running I/O for 1 seconds... 
00:19:25.252 5360.00 IOPS, 20.94 MiB/s 00:19:25.252 Latency(us) 00:19:25.252 [2024-12-09T16:29:51.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.252 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:25.252 Verification LBA range: start 0x0 length 0x2000 00:19:25.252 nvme0n1 : 1.01 5420.44 21.17 0.00 0.00 23464.14 4805.97 43690.67 00:19:25.252 [2024-12-09T16:29:51.792Z] =================================================================================================================== 00:19:25.252 [2024-12-09T16:29:51.792Z] Total : 5420.44 21.17 0.00 0.00 23464.14 4805.97 43690.67 00:19:25.252 { 00:19:25.252 "results": [ 00:19:25.252 { 00:19:25.252 "job": "nvme0n1", 00:19:25.252 "core_mask": "0x2", 00:19:25.252 "workload": "verify", 00:19:25.252 "status": "finished", 00:19:25.252 "verify_range": { 00:19:25.252 "start": 0, 00:19:25.252 "length": 8192 00:19:25.252 }, 00:19:25.252 "queue_depth": 128, 00:19:25.252 "io_size": 4096, 00:19:25.252 "runtime": 1.012463, 00:19:25.252 "iops": 5420.444994039289, 00:19:25.252 "mibps": 21.17361325796597, 00:19:25.252 "io_failed": 0, 00:19:25.252 "io_timeout": 0, 00:19:25.252 "avg_latency_us": 23464.143273635986, 00:19:25.252 "min_latency_us": 4805.973333333333, 00:19:25.252 "max_latency_us": 43690.666666666664 00:19:25.252 } 00:19:25.252 ], 00:19:25.252 "core_count": 1 00:19:25.252 } 00:19:25.252 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:25.252 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.252 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.252 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.252 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:25.252 "subsystems": [ 00:19:25.252 { 00:19:25.252 "subsystem": 
"keyring", 00:19:25.252 "config": [ 00:19:25.252 { 00:19:25.252 "method": "keyring_file_add_key", 00:19:25.252 "params": { 00:19:25.252 "name": "key0", 00:19:25.252 "path": "/tmp/tmp.8NZE8H98Z0" 00:19:25.252 } 00:19:25.252 } 00:19:25.252 ] 00:19:25.252 }, 00:19:25.252 { 00:19:25.252 "subsystem": "iobuf", 00:19:25.252 "config": [ 00:19:25.252 { 00:19:25.252 "method": "iobuf_set_options", 00:19:25.252 "params": { 00:19:25.252 "small_pool_count": 8192, 00:19:25.252 "large_pool_count": 1024, 00:19:25.252 "small_bufsize": 8192, 00:19:25.252 "large_bufsize": 135168, 00:19:25.252 "enable_numa": false 00:19:25.252 } 00:19:25.252 } 00:19:25.252 ] 00:19:25.252 }, 00:19:25.252 { 00:19:25.252 "subsystem": "sock", 00:19:25.252 "config": [ 00:19:25.252 { 00:19:25.252 "method": "sock_set_default_impl", 00:19:25.252 "params": { 00:19:25.252 "impl_name": "posix" 00:19:25.252 } 00:19:25.252 }, 00:19:25.252 { 00:19:25.252 "method": "sock_impl_set_options", 00:19:25.252 "params": { 00:19:25.252 "impl_name": "ssl", 00:19:25.252 "recv_buf_size": 4096, 00:19:25.252 "send_buf_size": 4096, 00:19:25.252 "enable_recv_pipe": true, 00:19:25.252 "enable_quickack": false, 00:19:25.252 "enable_placement_id": 0, 00:19:25.252 "enable_zerocopy_send_server": true, 00:19:25.252 "enable_zerocopy_send_client": false, 00:19:25.252 "zerocopy_threshold": 0, 00:19:25.252 "tls_version": 0, 00:19:25.252 "enable_ktls": false 00:19:25.252 } 00:19:25.252 }, 00:19:25.252 { 00:19:25.252 "method": "sock_impl_set_options", 00:19:25.252 "params": { 00:19:25.252 "impl_name": "posix", 00:19:25.252 "recv_buf_size": 2097152, 00:19:25.252 "send_buf_size": 2097152, 00:19:25.252 "enable_recv_pipe": true, 00:19:25.252 "enable_quickack": false, 00:19:25.252 "enable_placement_id": 0, 00:19:25.252 "enable_zerocopy_send_server": true, 00:19:25.252 "enable_zerocopy_send_client": false, 00:19:25.252 "zerocopy_threshold": 0, 00:19:25.252 "tls_version": 0, 00:19:25.252 "enable_ktls": false 00:19:25.252 } 00:19:25.252 } 00:19:25.252 
] 00:19:25.252 }, 00:19:25.252 { 00:19:25.252 "subsystem": "vmd", 00:19:25.252 "config": [] 00:19:25.252 }, 00:19:25.252 { 00:19:25.252 "subsystem": "accel", 00:19:25.252 "config": [ 00:19:25.252 { 00:19:25.252 "method": "accel_set_options", 00:19:25.252 "params": { 00:19:25.252 "small_cache_size": 128, 00:19:25.252 "large_cache_size": 16, 00:19:25.252 "task_count": 2048, 00:19:25.252 "sequence_count": 2048, 00:19:25.252 "buf_count": 2048 00:19:25.252 } 00:19:25.252 } 00:19:25.252 ] 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "subsystem": "bdev", 00:19:25.253 "config": [ 00:19:25.253 { 00:19:25.253 "method": "bdev_set_options", 00:19:25.253 "params": { 00:19:25.253 "bdev_io_pool_size": 65535, 00:19:25.253 "bdev_io_cache_size": 256, 00:19:25.253 "bdev_auto_examine": true, 00:19:25.253 "iobuf_small_cache_size": 128, 00:19:25.253 "iobuf_large_cache_size": 16 00:19:25.253 } 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "method": "bdev_raid_set_options", 00:19:25.253 "params": { 00:19:25.253 "process_window_size_kb": 1024, 00:19:25.253 "process_max_bandwidth_mb_sec": 0 00:19:25.253 } 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "method": "bdev_iscsi_set_options", 00:19:25.253 "params": { 00:19:25.253 "timeout_sec": 30 00:19:25.253 } 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "method": "bdev_nvme_set_options", 00:19:25.253 "params": { 00:19:25.253 "action_on_timeout": "none", 00:19:25.253 "timeout_us": 0, 00:19:25.253 "timeout_admin_us": 0, 00:19:25.253 "keep_alive_timeout_ms": 10000, 00:19:25.253 "arbitration_burst": 0, 00:19:25.253 "low_priority_weight": 0, 00:19:25.253 "medium_priority_weight": 0, 00:19:25.253 "high_priority_weight": 0, 00:19:25.253 "nvme_adminq_poll_period_us": 10000, 00:19:25.253 "nvme_ioq_poll_period_us": 0, 00:19:25.253 "io_queue_requests": 0, 00:19:25.253 "delay_cmd_submit": true, 00:19:25.253 "transport_retry_count": 4, 00:19:25.253 "bdev_retry_count": 3, 00:19:25.253 "transport_ack_timeout": 0, 00:19:25.253 "ctrlr_loss_timeout_sec": 0, 
00:19:25.253 "reconnect_delay_sec": 0, 00:19:25.253 "fast_io_fail_timeout_sec": 0, 00:19:25.253 "disable_auto_failback": false, 00:19:25.253 "generate_uuids": false, 00:19:25.253 "transport_tos": 0, 00:19:25.253 "nvme_error_stat": false, 00:19:25.253 "rdma_srq_size": 0, 00:19:25.253 "io_path_stat": false, 00:19:25.253 "allow_accel_sequence": false, 00:19:25.253 "rdma_max_cq_size": 0, 00:19:25.253 "rdma_cm_event_timeout_ms": 0, 00:19:25.253 "dhchap_digests": [ 00:19:25.253 "sha256", 00:19:25.253 "sha384", 00:19:25.253 "sha512" 00:19:25.253 ], 00:19:25.253 "dhchap_dhgroups": [ 00:19:25.253 "null", 00:19:25.253 "ffdhe2048", 00:19:25.253 "ffdhe3072", 00:19:25.253 "ffdhe4096", 00:19:25.253 "ffdhe6144", 00:19:25.253 "ffdhe8192" 00:19:25.253 ] 00:19:25.253 } 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "method": "bdev_nvme_set_hotplug", 00:19:25.253 "params": { 00:19:25.253 "period_us": 100000, 00:19:25.253 "enable": false 00:19:25.253 } 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "method": "bdev_malloc_create", 00:19:25.253 "params": { 00:19:25.253 "name": "malloc0", 00:19:25.253 "num_blocks": 8192, 00:19:25.253 "block_size": 4096, 00:19:25.253 "physical_block_size": 4096, 00:19:25.253 "uuid": "e7a182c9-e177-45c2-8cba-981a607d2ca8", 00:19:25.253 "optimal_io_boundary": 0, 00:19:25.253 "md_size": 0, 00:19:25.253 "dif_type": 0, 00:19:25.253 "dif_is_head_of_md": false, 00:19:25.253 "dif_pi_format": 0 00:19:25.253 } 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "method": "bdev_wait_for_examine" 00:19:25.253 } 00:19:25.253 ] 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "subsystem": "nbd", 00:19:25.253 "config": [] 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "subsystem": "scheduler", 00:19:25.253 "config": [ 00:19:25.253 { 00:19:25.253 "method": "framework_set_scheduler", 00:19:25.253 "params": { 00:19:25.253 "name": "static" 00:19:25.253 } 00:19:25.253 } 00:19:25.253 ] 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "subsystem": "nvmf", 00:19:25.253 "config": [ 00:19:25.253 { 
00:19:25.253 "method": "nvmf_set_config", 00:19:25.253 "params": { 00:19:25.253 "discovery_filter": "match_any", 00:19:25.253 "admin_cmd_passthru": { 00:19:25.253 "identify_ctrlr": false 00:19:25.253 }, 00:19:25.253 "dhchap_digests": [ 00:19:25.253 "sha256", 00:19:25.253 "sha384", 00:19:25.253 "sha512" 00:19:25.253 ], 00:19:25.253 "dhchap_dhgroups": [ 00:19:25.253 "null", 00:19:25.253 "ffdhe2048", 00:19:25.253 "ffdhe3072", 00:19:25.253 "ffdhe4096", 00:19:25.253 "ffdhe6144", 00:19:25.253 "ffdhe8192" 00:19:25.253 ] 00:19:25.253 } 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "method": "nvmf_set_max_subsystems", 00:19:25.253 "params": { 00:19:25.253 "max_subsystems": 1024 00:19:25.253 } 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "method": "nvmf_set_crdt", 00:19:25.253 "params": { 00:19:25.253 "crdt1": 0, 00:19:25.253 "crdt2": 0, 00:19:25.253 "crdt3": 0 00:19:25.253 } 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "method": "nvmf_create_transport", 00:19:25.253 "params": { 00:19:25.253 "trtype": "TCP", 00:19:25.253 "max_queue_depth": 128, 00:19:25.253 "max_io_qpairs_per_ctrlr": 127, 00:19:25.253 "in_capsule_data_size": 4096, 00:19:25.253 "max_io_size": 131072, 00:19:25.253 "io_unit_size": 131072, 00:19:25.253 "max_aq_depth": 128, 00:19:25.253 "num_shared_buffers": 511, 00:19:25.253 "buf_cache_size": 4294967295, 00:19:25.253 "dif_insert_or_strip": false, 00:19:25.253 "zcopy": false, 00:19:25.253 "c2h_success": false, 00:19:25.253 "sock_priority": 0, 00:19:25.253 "abort_timeout_sec": 1, 00:19:25.253 "ack_timeout": 0, 00:19:25.253 "data_wr_pool_size": 0 00:19:25.253 } 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "method": "nvmf_create_subsystem", 00:19:25.253 "params": { 00:19:25.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.253 "allow_any_host": false, 00:19:25.253 "serial_number": "00000000000000000000", 00:19:25.253 "model_number": "SPDK bdev Controller", 00:19:25.253 "max_namespaces": 32, 00:19:25.253 "min_cntlid": 1, 00:19:25.253 "max_cntlid": 65519, 00:19:25.253 
"ana_reporting": false 00:19:25.253 } 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "method": "nvmf_subsystem_add_host", 00:19:25.253 "params": { 00:19:25.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.253 "host": "nqn.2016-06.io.spdk:host1", 00:19:25.253 "psk": "key0" 00:19:25.253 } 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "method": "nvmf_subsystem_add_ns", 00:19:25.253 "params": { 00:19:25.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.253 "namespace": { 00:19:25.253 "nsid": 1, 00:19:25.253 "bdev_name": "malloc0", 00:19:25.253 "nguid": "E7A182C9E17745C28CBA981A607D2CA8", 00:19:25.253 "uuid": "e7a182c9-e177-45c2-8cba-981a607d2ca8", 00:19:25.253 "no_auto_visible": false 00:19:25.253 } 00:19:25.253 } 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "method": "nvmf_subsystem_add_listener", 00:19:25.253 "params": { 00:19:25.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.253 "listen_address": { 00:19:25.253 "trtype": "TCP", 00:19:25.253 "adrfam": "IPv4", 00:19:25.253 "traddr": "10.0.0.2", 00:19:25.253 "trsvcid": "4420" 00:19:25.253 }, 00:19:25.253 "secure_channel": false, 00:19:25.253 "sock_impl": "ssl" 00:19:25.253 } 00:19:25.253 } 00:19:25.253 ] 00:19:25.253 } 00:19:25.253 ] 00:19:25.253 }' 00:19:25.253 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:25.512 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:25.512 "subsystems": [ 00:19:25.512 { 00:19:25.512 "subsystem": "keyring", 00:19:25.512 "config": [ 00:19:25.512 { 00:19:25.512 "method": "keyring_file_add_key", 00:19:25.512 "params": { 00:19:25.512 "name": "key0", 00:19:25.512 "path": "/tmp/tmp.8NZE8H98Z0" 00:19:25.512 } 00:19:25.512 } 00:19:25.512 ] 00:19:25.512 }, 00:19:25.512 { 00:19:25.512 "subsystem": "iobuf", 00:19:25.512 "config": [ 00:19:25.512 { 00:19:25.512 "method": "iobuf_set_options", 00:19:25.512 "params": { 00:19:25.512 
"small_pool_count": 8192, 00:19:25.512 "large_pool_count": 1024, 00:19:25.512 "small_bufsize": 8192, 00:19:25.512 "large_bufsize": 135168, 00:19:25.512 "enable_numa": false 00:19:25.512 } 00:19:25.512 } 00:19:25.512 ] 00:19:25.512 }, 00:19:25.512 { 00:19:25.512 "subsystem": "sock", 00:19:25.512 "config": [ 00:19:25.512 { 00:19:25.512 "method": "sock_set_default_impl", 00:19:25.512 "params": { 00:19:25.512 "impl_name": "posix" 00:19:25.512 } 00:19:25.512 }, 00:19:25.512 { 00:19:25.512 "method": "sock_impl_set_options", 00:19:25.512 "params": { 00:19:25.512 "impl_name": "ssl", 00:19:25.512 "recv_buf_size": 4096, 00:19:25.512 "send_buf_size": 4096, 00:19:25.512 "enable_recv_pipe": true, 00:19:25.512 "enable_quickack": false, 00:19:25.512 "enable_placement_id": 0, 00:19:25.512 "enable_zerocopy_send_server": true, 00:19:25.512 "enable_zerocopy_send_client": false, 00:19:25.512 "zerocopy_threshold": 0, 00:19:25.512 "tls_version": 0, 00:19:25.512 "enable_ktls": false 00:19:25.512 } 00:19:25.512 }, 00:19:25.512 { 00:19:25.512 "method": "sock_impl_set_options", 00:19:25.512 "params": { 00:19:25.512 "impl_name": "posix", 00:19:25.512 "recv_buf_size": 2097152, 00:19:25.512 "send_buf_size": 2097152, 00:19:25.512 "enable_recv_pipe": true, 00:19:25.512 "enable_quickack": false, 00:19:25.512 "enable_placement_id": 0, 00:19:25.512 "enable_zerocopy_send_server": true, 00:19:25.512 "enable_zerocopy_send_client": false, 00:19:25.512 "zerocopy_threshold": 0, 00:19:25.512 "tls_version": 0, 00:19:25.512 "enable_ktls": false 00:19:25.512 } 00:19:25.512 } 00:19:25.512 ] 00:19:25.512 }, 00:19:25.512 { 00:19:25.512 "subsystem": "vmd", 00:19:25.512 "config": [] 00:19:25.512 }, 00:19:25.512 { 00:19:25.512 "subsystem": "accel", 00:19:25.512 "config": [ 00:19:25.512 { 00:19:25.512 "method": "accel_set_options", 00:19:25.512 "params": { 00:19:25.512 "small_cache_size": 128, 00:19:25.512 "large_cache_size": 16, 00:19:25.512 "task_count": 2048, 00:19:25.512 "sequence_count": 2048, 00:19:25.512 
"buf_count": 2048 00:19:25.512 } 00:19:25.512 } 00:19:25.512 ] 00:19:25.512 }, 00:19:25.512 { 00:19:25.512 "subsystem": "bdev", 00:19:25.512 "config": [ 00:19:25.512 { 00:19:25.512 "method": "bdev_set_options", 00:19:25.512 "params": { 00:19:25.512 "bdev_io_pool_size": 65535, 00:19:25.512 "bdev_io_cache_size": 256, 00:19:25.512 "bdev_auto_examine": true, 00:19:25.512 "iobuf_small_cache_size": 128, 00:19:25.512 "iobuf_large_cache_size": 16 00:19:25.512 } 00:19:25.512 }, 00:19:25.512 { 00:19:25.512 "method": "bdev_raid_set_options", 00:19:25.512 "params": { 00:19:25.512 "process_window_size_kb": 1024, 00:19:25.512 "process_max_bandwidth_mb_sec": 0 00:19:25.512 } 00:19:25.512 }, 00:19:25.512 { 00:19:25.512 "method": "bdev_iscsi_set_options", 00:19:25.512 "params": { 00:19:25.512 "timeout_sec": 30 00:19:25.512 } 00:19:25.512 }, 00:19:25.512 { 00:19:25.512 "method": "bdev_nvme_set_options", 00:19:25.512 "params": { 00:19:25.512 "action_on_timeout": "none", 00:19:25.512 "timeout_us": 0, 00:19:25.512 "timeout_admin_us": 0, 00:19:25.512 "keep_alive_timeout_ms": 10000, 00:19:25.512 "arbitration_burst": 0, 00:19:25.512 "low_priority_weight": 0, 00:19:25.512 "medium_priority_weight": 0, 00:19:25.512 "high_priority_weight": 0, 00:19:25.512 "nvme_adminq_poll_period_us": 10000, 00:19:25.512 "nvme_ioq_poll_period_us": 0, 00:19:25.512 "io_queue_requests": 512, 00:19:25.512 "delay_cmd_submit": true, 00:19:25.512 "transport_retry_count": 4, 00:19:25.512 "bdev_retry_count": 3, 00:19:25.512 "transport_ack_timeout": 0, 00:19:25.512 "ctrlr_loss_timeout_sec": 0, 00:19:25.512 "reconnect_delay_sec": 0, 00:19:25.512 "fast_io_fail_timeout_sec": 0, 00:19:25.512 "disable_auto_failback": false, 00:19:25.512 "generate_uuids": false, 00:19:25.512 "transport_tos": 0, 00:19:25.512 "nvme_error_stat": false, 00:19:25.512 "rdma_srq_size": 0, 00:19:25.512 "io_path_stat": false, 00:19:25.512 "allow_accel_sequence": false, 00:19:25.512 "rdma_max_cq_size": 0, 00:19:25.512 "rdma_cm_event_timeout_ms": 0, 
00:19:25.512 "dhchap_digests": [ 00:19:25.512 "sha256", 00:19:25.512 "sha384", 00:19:25.512 "sha512" 00:19:25.512 ], 00:19:25.512 "dhchap_dhgroups": [ 00:19:25.512 "null", 00:19:25.512 "ffdhe2048", 00:19:25.512 "ffdhe3072", 00:19:25.512 "ffdhe4096", 00:19:25.512 "ffdhe6144", 00:19:25.512 "ffdhe8192" 00:19:25.512 ] 00:19:25.512 } 00:19:25.513 }, 00:19:25.513 { 00:19:25.513 "method": "bdev_nvme_attach_controller", 00:19:25.513 "params": { 00:19:25.513 "name": "nvme0", 00:19:25.513 "trtype": "TCP", 00:19:25.513 "adrfam": "IPv4", 00:19:25.513 "traddr": "10.0.0.2", 00:19:25.513 "trsvcid": "4420", 00:19:25.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.513 "prchk_reftag": false, 00:19:25.513 "prchk_guard": false, 00:19:25.513 "ctrlr_loss_timeout_sec": 0, 00:19:25.513 "reconnect_delay_sec": 0, 00:19:25.513 "fast_io_fail_timeout_sec": 0, 00:19:25.513 "psk": "key0", 00:19:25.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.513 "hdgst": false, 00:19:25.513 "ddgst": false, 00:19:25.513 "multipath": "multipath" 00:19:25.513 } 00:19:25.513 }, 00:19:25.513 { 00:19:25.513 "method": "bdev_nvme_set_hotplug", 00:19:25.513 "params": { 00:19:25.513 "period_us": 100000, 00:19:25.513 "enable": false 00:19:25.513 } 00:19:25.513 }, 00:19:25.513 { 00:19:25.513 "method": "bdev_enable_histogram", 00:19:25.513 "params": { 00:19:25.513 "name": "nvme0n1", 00:19:25.513 "enable": true 00:19:25.513 } 00:19:25.513 }, 00:19:25.513 { 00:19:25.513 "method": "bdev_wait_for_examine" 00:19:25.513 } 00:19:25.513 ] 00:19:25.513 }, 00:19:25.513 { 00:19:25.513 "subsystem": "nbd", 00:19:25.513 "config": [] 00:19:25.513 } 00:19:25.513 ] 00:19:25.513 }' 00:19:25.513 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1925838 00:19:25.513 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1925838 ']' 00:19:25.513 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1925838 00:19:25.513 17:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:25.513 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.513 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1925838 00:19:25.513 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:25.513 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:25.513 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1925838' 00:19:25.513 killing process with pid 1925838 00:19:25.513 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1925838 00:19:25.513 Received shutdown signal, test time was about 1.000000 seconds 00:19:25.513 00:19:25.513 Latency(us) 00:19:25.513 [2024-12-09T16:29:52.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.513 [2024-12-09T16:29:52.053Z] =================================================================================================================== 00:19:25.513 [2024-12-09T16:29:52.053Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.513 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1925838 00:19:25.772 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1925702 00:19:25.772 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1925702 ']' 00:19:25.772 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1925702 00:19:25.772 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:25.772 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.772 
17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1925702 00:19:25.772 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.772 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.772 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1925702' 00:19:25.772 killing process with pid 1925702 00:19:25.772 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1925702 00:19:25.772 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1925702 00:19:26.031 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:26.031 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.031 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.031 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:26.031 "subsystems": [ 00:19:26.031 { 00:19:26.031 "subsystem": "keyring", 00:19:26.031 "config": [ 00:19:26.031 { 00:19:26.031 "method": "keyring_file_add_key", 00:19:26.031 "params": { 00:19:26.031 "name": "key0", 00:19:26.031 "path": "/tmp/tmp.8NZE8H98Z0" 00:19:26.031 } 00:19:26.031 } 00:19:26.031 ] 00:19:26.031 }, 00:19:26.031 { 00:19:26.031 "subsystem": "iobuf", 00:19:26.031 "config": [ 00:19:26.031 { 00:19:26.031 "method": "iobuf_set_options", 00:19:26.031 "params": { 00:19:26.031 "small_pool_count": 8192, 00:19:26.031 "large_pool_count": 1024, 00:19:26.031 "small_bufsize": 8192, 00:19:26.031 "large_bufsize": 135168, 00:19:26.031 "enable_numa": false 00:19:26.031 } 00:19:26.031 } 00:19:26.031 ] 00:19:26.031 }, 00:19:26.031 { 00:19:26.031 "subsystem": "sock", 00:19:26.031 "config": [ 
00:19:26.031 { 00:19:26.031 "method": "sock_set_default_impl", 00:19:26.031 "params": { 00:19:26.031 "impl_name": "posix" 00:19:26.031 } 00:19:26.031 }, 00:19:26.031 { 00:19:26.031 "method": "sock_impl_set_options", 00:19:26.031 "params": { 00:19:26.031 "impl_name": "ssl", 00:19:26.031 "recv_buf_size": 4096, 00:19:26.031 "send_buf_size": 4096, 00:19:26.032 "enable_recv_pipe": true, 00:19:26.032 "enable_quickack": false, 00:19:26.032 "enable_placement_id": 0, 00:19:26.032 "enable_zerocopy_send_server": true, 00:19:26.032 "enable_zerocopy_send_client": false, 00:19:26.032 "zerocopy_threshold": 0, 00:19:26.032 "tls_version": 0, 00:19:26.032 "enable_ktls": false 00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "method": "sock_impl_set_options", 00:19:26.032 "params": { 00:19:26.032 "impl_name": "posix", 00:19:26.032 "recv_buf_size": 2097152, 00:19:26.032 "send_buf_size": 2097152, 00:19:26.032 "enable_recv_pipe": true, 00:19:26.032 "enable_quickack": false, 00:19:26.032 "enable_placement_id": 0, 00:19:26.032 "enable_zerocopy_send_server": true, 00:19:26.032 "enable_zerocopy_send_client": false, 00:19:26.032 "zerocopy_threshold": 0, 00:19:26.032 "tls_version": 0, 00:19:26.032 "enable_ktls": false 00:19:26.032 } 00:19:26.032 } 00:19:26.032 ] 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "subsystem": "vmd", 00:19:26.032 "config": [] 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "subsystem": "accel", 00:19:26.032 "config": [ 00:19:26.032 { 00:19:26.032 "method": "accel_set_options", 00:19:26.032 "params": { 00:19:26.032 "small_cache_size": 128, 00:19:26.032 "large_cache_size": 16, 00:19:26.032 "task_count": 2048, 00:19:26.032 "sequence_count": 2048, 00:19:26.032 "buf_count": 2048 00:19:26.032 } 00:19:26.032 } 00:19:26.032 ] 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "subsystem": "bdev", 00:19:26.032 "config": [ 00:19:26.032 { 00:19:26.032 "method": "bdev_set_options", 00:19:26.032 "params": { 00:19:26.032 "bdev_io_pool_size": 65535, 00:19:26.032 "bdev_io_cache_size": 
256, 00:19:26.032 "bdev_auto_examine": true, 00:19:26.032 "iobuf_small_cache_size": 128, 00:19:26.032 "iobuf_large_cache_size": 16 00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "method": "bdev_raid_set_options", 00:19:26.032 "params": { 00:19:26.032 "process_window_size_kb": 1024, 00:19:26.032 "process_max_bandwidth_mb_sec": 0 00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "method": "bdev_iscsi_set_options", 00:19:26.032 "params": { 00:19:26.032 "timeout_sec": 30 00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "method": "bdev_nvme_set_options", 00:19:26.032 "params": { 00:19:26.032 "action_on_timeout": "none", 00:19:26.032 "timeout_us": 0, 00:19:26.032 "timeout_admin_us": 0, 00:19:26.032 "keep_alive_timeout_ms": 10000, 00:19:26.032 "arbitration_burst": 0, 00:19:26.032 "low_priority_weight": 0, 00:19:26.032 "medium_priority_weight": 0, 00:19:26.032 "high_priority_weight": 0, 00:19:26.032 "nvme_adminq_poll_period_us": 10000, 00:19:26.032 "nvme_ioq_poll_period_us": 0, 00:19:26.032 "io_queue_requests": 0, 00:19:26.032 "delay_cmd_submit": true, 00:19:26.032 "transport_retry_count": 4, 00:19:26.032 "bdev_retry_count": 3, 00:19:26.032 "transport_ack_timeout": 0, 00:19:26.032 "ctrlr_loss_timeout_sec": 0, 00:19:26.032 "reconnect_delay_sec": 0, 00:19:26.032 "fast_io_fail_timeout_sec": 0, 00:19:26.032 "disable_auto_failback": false, 00:19:26.032 "generate_uuids": false, 00:19:26.032 "transport_tos": 0, 00:19:26.032 "nvme_error_stat": false, 00:19:26.032 "rdma_srq_size": 0, 00:19:26.032 "io_path_stat": false, 00:19:26.032 "allow_accel_sequence": false, 00:19:26.032 "rdma_max_cq_size": 0, 00:19:26.032 "rdma_cm_event_timeout_ms": 0, 00:19:26.032 "dhchap_digests": [ 00:19:26.032 "sha256", 00:19:26.032 "sha384", 00:19:26.032 "sha512" 00:19:26.032 ], 00:19:26.032 "dhchap_dhgroups": [ 00:19:26.032 "null", 00:19:26.032 "ffdhe2048", 00:19:26.032 "ffdhe3072", 00:19:26.032 "ffdhe4096", 00:19:26.032 "ffdhe6144", 00:19:26.032 "ffdhe8192" 00:19:26.032 ] 
00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "method": "bdev_nvme_set_hotplug", 00:19:26.032 "params": { 00:19:26.032 "period_us": 100000, 00:19:26.032 "enable": false 00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "method": "bdev_malloc_create", 00:19:26.032 "params": { 00:19:26.032 "name": "malloc0", 00:19:26.032 "num_blocks": 8192, 00:19:26.032 "block_size": 4096, 00:19:26.032 "physical_block_size": 4096, 00:19:26.032 "uuid": "e7a182c9-e177-45c2-8cba-981a607d2ca8", 00:19:26.032 "optimal_io_boundary": 0, 00:19:26.032 "md_size": 0, 00:19:26.032 "dif_type": 0, 00:19:26.032 "dif_is_head_of_md": false, 00:19:26.032 "dif_pi_format": 0 00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "method": "bdev_wait_for_examine" 00:19:26.032 } 00:19:26.032 ] 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "subsystem": "nbd", 00:19:26.032 "config": [] 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "subsystem": "scheduler", 00:19:26.032 "config": [ 00:19:26.032 { 00:19:26.032 "method": "framework_set_scheduler", 00:19:26.032 "params": { 00:19:26.032 "name": "static" 00:19:26.032 } 00:19:26.032 } 00:19:26.032 ] 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "subsystem": "nvmf", 00:19:26.032 "config": [ 00:19:26.032 { 00:19:26.032 "method": "nvmf_set_config", 00:19:26.032 "params": { 00:19:26.032 "discovery_filter": "match_any", 00:19:26.032 "admin_cmd_passthru": { 00:19:26.032 "identify_ctrlr": false 00:19:26.032 }, 00:19:26.032 "dhchap_digests": [ 00:19:26.032 "sha256", 00:19:26.032 "sha384", 00:19:26.032 "sha512" 00:19:26.032 ], 00:19:26.032 "dhchap_dhgroups": [ 00:19:26.032 "null", 00:19:26.032 "ffdhe2048", 00:19:26.032 "ffdhe3072", 00:19:26.032 "ffdhe4096", 00:19:26.032 "ffdhe6144", 00:19:26.032 "ffdhe8192" 00:19:26.032 ] 00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "method": "nvmf_set_max_subsystems", 00:19:26.032 "params": { 00:19:26.032 "max_subsystems": 1024 00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "method": 
"nvmf_set_crdt", 00:19:26.032 "params": { 00:19:26.032 "crdt1": 0, 00:19:26.032 "crdt2": 0, 00:19:26.032 "crdt3": 0 00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "method": "nvmf_create_transport", 00:19:26.032 "params": { 00:19:26.032 "trtype": "TCP", 00:19:26.032 "max_queue_depth": 128, 00:19:26.032 "max_io_qpairs_per_ctrlr": 127, 00:19:26.032 "in_capsule_data_size": 4096, 00:19:26.032 "max_io_size": 131072, 00:19:26.032 "io_unit_size": 131072, 00:19:26.032 "max_aq_depth": 128, 00:19:26.032 "num_shared_buffers": 511, 00:19:26.032 "buf_cache_size": 4294967295, 00:19:26.032 "dif_insert_or_strip": false, 00:19:26.032 "zcopy": false, 00:19:26.032 "c2h_success": false, 00:19:26.032 "sock_priority": 0, 00:19:26.032 "abort_timeout_sec": 1, 00:19:26.032 "ack_timeout": 0, 00:19:26.032 "data_wr_pool_size": 0 00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "method": "nvmf_create_subsystem", 00:19:26.032 "params": { 00:19:26.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.032 "allow_any_host": false, 00:19:26.032 "serial_number": "00000000000000000000", 00:19:26.032 "model_number": "SPDK bdev Controller", 00:19:26.032 "max_namespaces": 32, 00:19:26.032 "min_cntlid": 1, 00:19:26.032 "max_cntlid": 65519, 00:19:26.032 "ana_reporting": false 00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "method": "nvmf_subsystem_add_host", 00:19:26.032 "params": { 00:19:26.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.032 "host": "nqn.2016-06.io.spdk:host1", 00:19:26.032 "psk": "key0" 00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 00:19:26.032 "method": "nvmf_subsystem_add_ns", 00:19:26.032 "params": { 00:19:26.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.032 "namespace": { 00:19:26.032 "nsid": 1, 00:19:26.032 "bdev_name": "malloc0", 00:19:26.032 "nguid": "E7A182C9E17745C28CBA981A607D2CA8", 00:19:26.032 "uuid": "e7a182c9-e177-45c2-8cba-981a607d2ca8", 00:19:26.032 "no_auto_visible": false 00:19:26.032 } 00:19:26.032 } 00:19:26.032 }, 00:19:26.032 { 
00:19:26.032 "method": "nvmf_subsystem_add_listener", 00:19:26.032 "params": { 00:19:26.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.032 "listen_address": { 00:19:26.032 "trtype": "TCP", 00:19:26.032 "adrfam": "IPv4", 00:19:26.032 "traddr": "10.0.0.2", 00:19:26.032 "trsvcid": "4420" 00:19:26.032 }, 00:19:26.032 "secure_channel": false, 00:19:26.032 "sock_impl": "ssl" 00:19:26.032 } 00:19:26.032 } 00:19:26.032 ] 00:19:26.032 } 00:19:26.032 ] 00:19:26.032 }' 00:19:26.032 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.032 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1926303 00:19:26.032 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:26.032 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1926303 00:19:26.032 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1926303 ']' 00:19:26.032 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.032 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.032 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.032 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.032 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.032 [2024-12-09 17:29:52.418300] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:19:26.032 [2024-12-09 17:29:52.418346] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.032 [2024-12-09 17:29:52.497094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.032 [2024-12-09 17:29:52.535920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.032 [2024-12-09 17:29:52.535956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.032 [2024-12-09 17:29:52.535963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.032 [2024-12-09 17:29:52.535969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.032 [2024-12-09 17:29:52.535974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:26.032 [2024-12-09 17:29:52.536484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.291 [2024-12-09 17:29:52.749210] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.291 [2024-12-09 17:29:52.781250] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:26.291 [2024-12-09 17:29:52.781454] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1926338 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1926338 /var/tmp/bdevperf.sock 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1926338 ']' 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.858 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:26.858 "subsystems": [ 00:19:26.858 { 00:19:26.858 "subsystem": "keyring", 00:19:26.858 "config": [ 00:19:26.858 { 00:19:26.858 "method": "keyring_file_add_key", 00:19:26.858 "params": { 00:19:26.858 "name": "key0", 00:19:26.858 "path": "/tmp/tmp.8NZE8H98Z0" 00:19:26.858 } 00:19:26.858 } 00:19:26.858 ] 00:19:26.858 }, 00:19:26.858 { 00:19:26.858 "subsystem": "iobuf", 00:19:26.858 "config": [ 00:19:26.858 { 00:19:26.858 "method": "iobuf_set_options", 00:19:26.858 "params": { 00:19:26.858 "small_pool_count": 8192, 00:19:26.858 "large_pool_count": 1024, 00:19:26.858 "small_bufsize": 8192, 00:19:26.858 "large_bufsize": 135168, 00:19:26.858 "enable_numa": false 00:19:26.858 } 00:19:26.858 } 00:19:26.858 ] 00:19:26.858 }, 00:19:26.858 { 00:19:26.858 "subsystem": "sock", 00:19:26.858 "config": [ 00:19:26.858 { 00:19:26.858 "method": "sock_set_default_impl", 00:19:26.858 "params": { 00:19:26.858 "impl_name": "posix" 00:19:26.858 } 00:19:26.858 }, 00:19:26.858 { 00:19:26.858 "method": "sock_impl_set_options", 00:19:26.858 "params": { 00:19:26.858 "impl_name": "ssl", 00:19:26.858 "recv_buf_size": 4096, 00:19:26.858 "send_buf_size": 4096, 00:19:26.858 "enable_recv_pipe": true, 00:19:26.858 "enable_quickack": false, 00:19:26.858 "enable_placement_id": 0, 00:19:26.858 "enable_zerocopy_send_server": true, 00:19:26.858 "enable_zerocopy_send_client": false, 00:19:26.858 "zerocopy_threshold": 0, 00:19:26.858 "tls_version": 0, 00:19:26.858 "enable_ktls": false 00:19:26.858 } 00:19:26.858 }, 00:19:26.858 { 00:19:26.858 "method": "sock_impl_set_options", 00:19:26.858 "params": { 
00:19:26.858 "impl_name": "posix", 00:19:26.858 "recv_buf_size": 2097152, 00:19:26.858 "send_buf_size": 2097152, 00:19:26.858 "enable_recv_pipe": true, 00:19:26.858 "enable_quickack": false, 00:19:26.858 "enable_placement_id": 0, 00:19:26.858 "enable_zerocopy_send_server": true, 00:19:26.858 "enable_zerocopy_send_client": false, 00:19:26.858 "zerocopy_threshold": 0, 00:19:26.858 "tls_version": 0, 00:19:26.858 "enable_ktls": false 00:19:26.858 } 00:19:26.858 } 00:19:26.858 ] 00:19:26.858 }, 00:19:26.858 { 00:19:26.858 "subsystem": "vmd", 00:19:26.858 "config": [] 00:19:26.858 }, 00:19:26.858 { 00:19:26.858 "subsystem": "accel", 00:19:26.858 "config": [ 00:19:26.858 { 00:19:26.858 "method": "accel_set_options", 00:19:26.858 "params": { 00:19:26.858 "small_cache_size": 128, 00:19:26.858 "large_cache_size": 16, 00:19:26.858 "task_count": 2048, 00:19:26.858 "sequence_count": 2048, 00:19:26.858 "buf_count": 2048 00:19:26.858 } 00:19:26.858 } 00:19:26.858 ] 00:19:26.858 }, 00:19:26.858 { 00:19:26.858 "subsystem": "bdev", 00:19:26.858 "config": [ 00:19:26.858 { 00:19:26.858 "method": "bdev_set_options", 00:19:26.858 "params": { 00:19:26.858 "bdev_io_pool_size": 65535, 00:19:26.858 "bdev_io_cache_size": 256, 00:19:26.858 "bdev_auto_examine": true, 00:19:26.858 "iobuf_small_cache_size": 128, 00:19:26.858 "iobuf_large_cache_size": 16 00:19:26.858 } 00:19:26.858 }, 00:19:26.858 { 00:19:26.858 "method": "bdev_raid_set_options", 00:19:26.858 "params": { 00:19:26.858 "process_window_size_kb": 1024, 00:19:26.858 "process_max_bandwidth_mb_sec": 0 00:19:26.858 } 00:19:26.858 }, 00:19:26.858 { 00:19:26.858 "method": "bdev_iscsi_set_options", 00:19:26.858 "params": { 00:19:26.858 "timeout_sec": 30 00:19:26.858 } 00:19:26.858 }, 00:19:26.858 { 00:19:26.858 "method": "bdev_nvme_set_options", 00:19:26.858 "params": { 00:19:26.858 "action_on_timeout": "none", 00:19:26.858 "timeout_us": 0, 00:19:26.858 "timeout_admin_us": 0, 00:19:26.858 "keep_alive_timeout_ms": 10000, 00:19:26.858 
"arbitration_burst": 0, 00:19:26.858 "low_priority_weight": 0, 00:19:26.858 "medium_priority_weight": 0, 00:19:26.858 "high_priority_weight": 0, 00:19:26.858 "nvme_adminq_poll_period_us": 10000, 00:19:26.858 "nvme_ioq_poll_period_us": 0, 00:19:26.858 "io_queue_requests": 512, 00:19:26.858 "delay_cmd_submit": true, 00:19:26.858 "transport_retry_count": 4, 00:19:26.858 "bdev_retry_count": 3, 00:19:26.858 "transport_ack_timeout": 0, 00:19:26.858 "ctrlr_loss_timeout_sec": 0, 00:19:26.858 "reconnect_delay_sec": 0, 00:19:26.858 "fast_io_fail_timeout_sec": 0, 00:19:26.858 "disable_auto_failback": false, 00:19:26.858 "generate_uuids": false, 00:19:26.858 "transport_tos": 0, 00:19:26.858 "nvme_error_stat": false, 00:19:26.858 "rdma_srq_size": 0, 00:19:26.858 "io_path_stat": false, 00:19:26.858 "allow_accel_sequence": false, 00:19:26.858 "rdma_max_cq_size": 0, 00:19:26.858 "rdma_cm_event_timeout_ms": 0, 00:19:26.858 "dhchap_digests": [ 00:19:26.858 "sha256", 00:19:26.858 "sha384", 00:19:26.858 "sha512" 00:19:26.858 ], 00:19:26.858 "dhchap_dhgroups": [ 00:19:26.858 "null", 00:19:26.858 "ffdhe2048", 00:19:26.858 "ffdhe3072", 00:19:26.858 "ffdhe4096", 00:19:26.858 "ffdhe6144", 00:19:26.858 "ffdhe8192" 00:19:26.858 ] 00:19:26.858 } 00:19:26.858 }, 00:19:26.858 { 00:19:26.858 "method": "bdev_nvme_attach_controller", 00:19:26.858 "params": { 00:19:26.858 "name": "nvme0", 00:19:26.858 "trtype": "TCP", 00:19:26.858 "adrfam": "IPv4", 00:19:26.858 "traddr": "10.0.0.2", 00:19:26.858 "trsvcid": "4420", 00:19:26.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.858 "prchk_reftag": false, 00:19:26.858 "prchk_guard": false, 00:19:26.858 "ctrlr_loss_timeout_sec": 0, 00:19:26.858 "reconnect_delay_sec": 0, 00:19:26.858 "fast_io_fail_timeout_sec": 0, 00:19:26.858 "psk": "key0", 00:19:26.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.858 "hdgst": false, 00:19:26.858 "ddgst": false, 00:19:26.858 "multipath": "multipath" 00:19:26.858 } 00:19:26.858 }, 00:19:26.858 { 00:19:26.858 
"method": "bdev_nvme_set_hotplug", 00:19:26.858 "params": { 00:19:26.858 "period_us": 100000, 00:19:26.859 "enable": false 00:19:26.859 } 00:19:26.859 }, 00:19:26.859 { 00:19:26.859 "method": "bdev_enable_histogram", 00:19:26.859 "params": { 00:19:26.859 "name": "nvme0n1", 00:19:26.859 "enable": true 00:19:26.859 } 00:19:26.859 }, 00:19:26.859 { 00:19:26.859 "method": "bdev_wait_for_examine" 00:19:26.859 } 00:19:26.859 ] 00:19:26.859 }, 00:19:26.859 { 00:19:26.859 "subsystem": "nbd", 00:19:26.859 "config": [] 00:19:26.859 } 00:19:26.859 ] 00:19:26.859 }' 00:19:26.859 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.859 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.859 [2024-12-09 17:29:53.326038] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:19:26.859 [2024-12-09 17:29:53.326083] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926338 ] 00:19:27.118 [2024-12-09 17:29:53.399857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.118 [2024-12-09 17:29:53.440447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.118 [2024-12-09 17:29:53.592941] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.685 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.685 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:27.685 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:27.685 17:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:27.945 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.945 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:27.945 Running I/O for 1 seconds... 00:19:29.322 5472.00 IOPS, 21.38 MiB/s 00:19:29.322 Latency(us) 00:19:29.322 [2024-12-09T16:29:55.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.322 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:29.322 Verification LBA range: start 0x0 length 0x2000 00:19:29.322 nvme0n1 : 1.01 5522.63 21.57 0.00 0.00 23017.54 5523.75 21845.33 00:19:29.322 [2024-12-09T16:29:55.862Z] =================================================================================================================== 00:19:29.322 [2024-12-09T16:29:55.862Z] Total : 5522.63 21.57 0.00 0.00 23017.54 5523.75 21845.33 00:19:29.322 { 00:19:29.322 "results": [ 00:19:29.322 { 00:19:29.322 "job": "nvme0n1", 00:19:29.322 "core_mask": "0x2", 00:19:29.322 "workload": "verify", 00:19:29.322 "status": "finished", 00:19:29.322 "verify_range": { 00:19:29.322 "start": 0, 00:19:29.322 "length": 8192 00:19:29.322 }, 00:19:29.322 "queue_depth": 128, 00:19:29.322 "io_size": 4096, 00:19:29.322 "runtime": 1.014009, 00:19:29.322 "iops": 5522.633428302905, 00:19:29.322 "mibps": 21.57278682930822, 00:19:29.322 "io_failed": 0, 00:19:29.322 "io_timeout": 0, 00:19:29.322 "avg_latency_us": 23017.5444462585, 00:19:29.322 "min_latency_us": 5523.748571428571, 00:19:29.322 "max_latency_us": 21845.333333333332 00:19:29.322 } 00:19:29.322 ], 00:19:29.322 "core_count": 1 00:19:29.322 } 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:29.322 17:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:29.322 nvmf_trace.0 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1926338 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1926338 ']' 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1926338 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.322 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1926338 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1926338' 00:19:29.323 killing process with pid 1926338 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1926338 00:19:29.323 Received shutdown signal, test time was about 1.000000 seconds 00:19:29.323 00:19:29.323 Latency(us) 00:19:29.323 [2024-12-09T16:29:55.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.323 [2024-12-09T16:29:55.863Z] =================================================================================================================== 00:19:29.323 [2024-12-09T16:29:55.863Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1926338 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:29.323 rmmod nvme_tcp 00:19:29.323 rmmod nvme_fabrics 00:19:29.323 rmmod nvme_keyring 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1926303 ']' 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1926303 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1926303 ']' 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1926303 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.323 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1926303 00:19:29.582 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.582 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.582 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1926303' 00:19:29.582 killing process with pid 1926303 00:19:29.582 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1926303 00:19:29.582 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1926303 00:19:29.583 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:29.583 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:29.583 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:29.583 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:19:29.583 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:29.583 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:29.583 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:29.583 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:29.583 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:29.583 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.583 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.583 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Av9keFCV44 /tmp/tmp.tyNddV1qCK /tmp/tmp.8NZE8H98Z0 00:19:32.120 00:19:32.120 real 1m19.172s 00:19:32.120 user 2m1.798s 00:19:32.120 sys 0m29.787s 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.120 ************************************ 00:19:32.120 END TEST nvmf_tls 00:19:32.120 ************************************ 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:32.120 ************************************ 00:19:32.120 START TEST nvmf_fips 00:19:32.120 ************************************ 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:32.120 * Looking for test storage... 00:19:32.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.120 
17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:32.120 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:32.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.120 --rc genhtml_branch_coverage=1 00:19:32.120 --rc genhtml_function_coverage=1 00:19:32.120 --rc genhtml_legend=1 00:19:32.120 --rc geninfo_all_blocks=1 00:19:32.120 --rc geninfo_unexecuted_blocks=1 00:19:32.120 00:19:32.120 ' 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:32.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.120 --rc genhtml_branch_coverage=1 00:19:32.120 --rc genhtml_function_coverage=1 00:19:32.120 --rc genhtml_legend=1 00:19:32.120 --rc geninfo_all_blocks=1 00:19:32.120 --rc geninfo_unexecuted_blocks=1 00:19:32.120 00:19:32.120 ' 00:19:32.120 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:32.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.120 --rc genhtml_branch_coverage=1 00:19:32.120 --rc genhtml_function_coverage=1 00:19:32.120 --rc genhtml_legend=1 00:19:32.120 --rc geninfo_all_blocks=1 00:19:32.120 --rc geninfo_unexecuted_blocks=1 00:19:32.120 00:19:32.120 ' 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:32.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.121 --rc genhtml_branch_coverage=1 00:19:32.121 --rc genhtml_function_coverage=1 00:19:32.121 --rc genhtml_legend=1 00:19:32.121 --rc geninfo_all_blocks=1 00:19:32.121 --rc geninfo_unexecuted_blocks=1 00:19:32.121 00:19:32.121 ' 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.121 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.121 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:32.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:32.121 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:32.122 Error setting digest 00:19:32.122 4042BBE5457F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:32.122 4042BBE5457F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:32.122 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:32.122 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.811 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:38.812 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:38.812 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:38.812 Found net devices under 0000:af:00.0: cvl_0_0 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:38.812 Found net devices under 0000:af:00.1: cvl_0_1 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:38.812 17:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:38.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:19:38.812 00:19:38.812 --- 10.0.0.2 ping statistics --- 00:19:38.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.812 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:38.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:38.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:19:38.812 00:19:38.812 --- 10.0.0.1 ping statistics --- 00:19:38.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.812 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:38.812 17:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1930414 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1930414 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1930414 ']' 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.812 17:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:38.812 [2024-12-09 17:30:04.586802] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:19:38.812 [2024-12-09 17:30:04.586851] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.812 [2024-12-09 17:30:04.663123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.812 [2024-12-09 17:30:04.702686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.812 [2024-12-09 17:30:04.702733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.812 [2024-12-09 17:30:04.702741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.813 [2024-12-09 17:30:04.702748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.813 [2024-12-09 17:30:04.702753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:38.813 [2024-12-09 17:30:04.703242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.EEC 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.EEC 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.EEC 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.EEC 00:19:39.070 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:39.329 [2024-12-09 17:30:05.632943] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.329 [2024-12-09 17:30:05.648941] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:39.329 [2024-12-09 17:30:05.649131] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.329 malloc0 00:19:39.329 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.329 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1930655 00:19:39.329 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1930655 /var/tmp/bdevperf.sock 00:19:39.329 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:39.329 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1930655 ']' 00:19:39.329 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.329 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.329 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.329 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.329 17:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:39.329 [2024-12-09 17:30:05.775831] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:19:39.329 [2024-12-09 17:30:05.775879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930655 ] 00:19:39.329 [2024-12-09 17:30:05.852342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.588 [2024-12-09 17:30:05.892559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.155 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.155 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:40.155 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.EEC 00:19:40.414 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.673 [2024-12-09 17:30:06.994645] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.673 TLSTESTn1 00:19:40.673 17:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:40.673 Running I/O for 10 seconds... 
00:19:42.984 5234.00 IOPS, 20.45 MiB/s [2024-12-09T16:30:10.460Z] 5242.00 IOPS, 20.48 MiB/s [2024-12-09T16:30:11.396Z] 5094.00 IOPS, 19.90 MiB/s [2024-12-09T16:30:12.331Z] 5029.00 IOPS, 19.64 MiB/s [2024-12-09T16:30:13.268Z] 4966.60 IOPS, 19.40 MiB/s [2024-12-09T16:30:14.204Z] 4975.00 IOPS, 19.43 MiB/s [2024-12-09T16:30:15.580Z] 4990.29 IOPS, 19.49 MiB/s [2024-12-09T16:30:16.515Z] 4981.62 IOPS, 19.46 MiB/s [2024-12-09T16:30:17.452Z] 4958.78 IOPS, 19.37 MiB/s [2024-12-09T16:30:17.452Z] 4963.00 IOPS, 19.39 MiB/s 00:19:50.912 Latency(us) 00:19:50.912 [2024-12-09T16:30:17.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.912 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:50.912 Verification LBA range: start 0x0 length 0x2000 00:19:50.912 TLSTESTn1 : 10.02 4967.22 19.40 0.00 0.00 25732.73 6522.39 30957.96 00:19:50.912 [2024-12-09T16:30:17.452Z] =================================================================================================================== 00:19:50.912 [2024-12-09T16:30:17.452Z] Total : 4967.22 19.40 0.00 0.00 25732.73 6522.39 30957.96 00:19:50.912 { 00:19:50.912 "results": [ 00:19:50.912 { 00:19:50.912 "job": "TLSTESTn1", 00:19:50.912 "core_mask": "0x4", 00:19:50.912 "workload": "verify", 00:19:50.912 "status": "finished", 00:19:50.912 "verify_range": { 00:19:50.912 "start": 0, 00:19:50.912 "length": 8192 00:19:50.912 }, 00:19:50.912 "queue_depth": 128, 00:19:50.912 "io_size": 4096, 00:19:50.912 "runtime": 10.017275, 00:19:50.912 "iops": 4967.219128954731, 00:19:50.912 "mibps": 19.403199722479417, 00:19:50.912 "io_failed": 0, 00:19:50.912 "io_timeout": 0, 00:19:50.912 "avg_latency_us": 25732.734063802134, 00:19:50.912 "min_latency_us": 6522.392380952381, 00:19:50.912 "max_latency_us": 30957.958095238097 00:19:50.912 } 00:19:50.912 ], 00:19:50.912 "core_count": 1 00:19:50.912 } 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:50.912 
17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:50.912 nvmf_trace.0 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1930655 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1930655 ']' 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1930655 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1930655 00:19:50.912 17:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1930655' 00:19:50.912 killing process with pid 1930655 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1930655 00:19:50.912 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.912 00:19:50.912 Latency(us) 00:19:50.912 [2024-12-09T16:30:17.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.912 [2024-12-09T16:30:17.452Z] =================================================================================================================== 00:19:50.912 [2024-12-09T16:30:17.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.912 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1930655 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:51.171 rmmod nvme_tcp 00:19:51.171 rmmod nvme_fabrics 00:19:51.171 rmmod nvme_keyring 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1930414 ']' 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1930414 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1930414 ']' 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1930414 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1930414 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1930414' 00:19:51.171 killing process with pid 1930414 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1930414 00:19:51.171 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1930414 00:19:51.430 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:51.430 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:51.430 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:51.430 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:51.430 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:51.430 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:51.430 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:51.430 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:51.431 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:51.431 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.431 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.431 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.966 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:53.966 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.EEC 00:19:53.966 00:19:53.966 real 0m21.695s 00:19:53.966 user 0m22.494s 00:19:53.966 sys 0m10.594s 00:19:53.966 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.966 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:53.966 ************************************ 00:19:53.966 END TEST nvmf_fips 00:19:53.966 ************************************ 00:19:53.966 17:30:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:53.966 17:30:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:53.966 17:30:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.966 17:30:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:53.966 ************************************ 00:19:53.966 START TEST nvmf_control_msg_list 00:19:53.966 ************************************ 00:19:53.966 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:53.966 * Looking for test storage... 00:19:53.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:53.966 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:53.966 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:53.966 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:53.966 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:53.966 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:53.966 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:53.966 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:53.966 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.966 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:53.967 17:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:53.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.967 --rc genhtml_branch_coverage=1 00:19:53.967 --rc genhtml_function_coverage=1 00:19:53.967 --rc genhtml_legend=1 00:19:53.967 --rc geninfo_all_blocks=1 00:19:53.967 --rc geninfo_unexecuted_blocks=1 00:19:53.967 00:19:53.967 ' 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:53.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.967 --rc genhtml_branch_coverage=1 00:19:53.967 --rc genhtml_function_coverage=1 00:19:53.967 --rc genhtml_legend=1 00:19:53.967 --rc geninfo_all_blocks=1 00:19:53.967 --rc geninfo_unexecuted_blocks=1 00:19:53.967 00:19:53.967 ' 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:53.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.967 --rc genhtml_branch_coverage=1 00:19:53.967 --rc genhtml_function_coverage=1 00:19:53.967 --rc genhtml_legend=1 00:19:53.967 --rc geninfo_all_blocks=1 00:19:53.967 --rc geninfo_unexecuted_blocks=1 00:19:53.967 00:19:53.967 ' 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:19:53.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.967 --rc genhtml_branch_coverage=1 00:19:53.967 --rc genhtml_function_coverage=1 00:19:53.967 --rc genhtml_legend=1 00:19:53.967 --rc geninfo_all_blocks=1 00:19:53.967 --rc geninfo_unexecuted_blocks=1 00:19:53.967 00:19:53.967 ' 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.967 17:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:53.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:53.967 17:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:53.967 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:53.968 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.968 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:53.968 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:53.968 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:53.968 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.968 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.968 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.968 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:53.968 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:53.968 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:53.968 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:59.242 17:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:59.242 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:59.243 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:59.243 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:59.243 17:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:59.243 Found net devices under 0000:af:00.0: cvl_0_0 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.243 17:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:59.243 Found net devices under 0000:af:00.1: cvl_0_1 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:59.243 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.502 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.502 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.502 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.502 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:59.502 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.502 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.502 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.502 17:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:59.502 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:59.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:19:59.502 00:19:59.502 --- 10.0.0.2 ping statistics --- 00:19:59.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.502 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:19:59.502 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:19:59.502 00:19:59.502 --- 10.0.0.1 ping statistics --- 00:19:59.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.502 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:19:59.502 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.502 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:59.502 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:59.502 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.502 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:59.502 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:59.502 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:59.502 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:59.502 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:59.761 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:59.761 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.761 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.761 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:59.761 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1936517 00:19:59.761 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:59.761 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1936517 00:19:59.761 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1936517 ']' 00:19:59.761 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.761 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.761 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:59.761 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.761 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:59.761 [2024-12-09 17:30:26.115740] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:19:59.761 [2024-12-09 17:30:26.115792] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.761 [2024-12-09 17:30:26.195144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.762 [2024-12-09 17:30:26.234975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.762 [2024-12-09 17:30:26.235008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.762 [2024-12-09 17:30:26.235015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.762 [2024-12-09 17:30:26.235021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.762 [2024-12-09 17:30:26.235026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:59.762 [2024-12-09 17:30:26.235487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.021 [2024-12-09 17:30:26.371287] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.021 Malloc0 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.021 [2024-12-09 17:30:26.411352] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1936540 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1936541 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1936542 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:00.021 17:30:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1936540 00:20:00.021 [2024-12-09 17:30:26.489762] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:00.021 [2024-12-09 17:30:26.509766] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:00.021 [2024-12-09 17:30:26.509905] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:01.398 Initializing NVMe Controllers 00:20:01.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:01.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:01.398 Initialization complete. Launching workers. 00:20:01.398 ======================================================== 00:20:01.398 Latency(us) 00:20:01.398 Device Information : IOPS MiB/s Average min max 00:20:01.398 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40927.15 40584.96 41978.32 00:20:01.398 ======================================================== 00:20:01.398 Total : 25.00 0.10 40927.15 40584.96 41978.32 00:20:01.398 00:20:01.398 Initializing NVMe Controllers 00:20:01.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:01.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:01.398 Initialization complete. Launching workers. 
00:20:01.398 ======================================================== 00:20:01.398 Latency(us) 00:20:01.398 Device Information : IOPS MiB/s Average min max 00:20:01.398 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40913.97 40773.97 41217.41 00:20:01.398 ======================================================== 00:20:01.398 Total : 25.00 0.10 40913.97 40773.97 41217.41 00:20:01.398 00:20:01.398 Initializing NVMe Controllers 00:20:01.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:01.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:01.398 Initialization complete. Launching workers. 00:20:01.398 ======================================================== 00:20:01.398 Latency(us) 00:20:01.398 Device Information : IOPS MiB/s Average min max 00:20:01.398 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 899.00 3.51 1111.43 149.55 41035.70 00:20:01.398 ======================================================== 00:20:01.398 Total : 899.00 3.51 1111.43 149.55 41035.70 00:20:01.398 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1936541 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1936542 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:01.398 17:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:01.398 rmmod nvme_tcp 00:20:01.398 rmmod nvme_fabrics 00:20:01.398 rmmod nvme_keyring 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1936517 ']' 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1936517 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1936517 ']' 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1936517 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1936517 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1936517' 00:20:01.398 killing process with pid 1936517 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1936517 00:20:01.398 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1936517 00:20:01.657 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:01.657 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:01.657 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:01.657 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:01.657 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:01.657 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:01.657 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:01.657 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:01.657 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:01.657 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.657 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.658 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:04.191 00:20:04.191 real 0m10.145s 00:20:04.191 user 0m6.904s 
00:20:04.191 sys 0m5.387s 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:04.191 ************************************ 00:20:04.191 END TEST nvmf_control_msg_list 00:20:04.191 ************************************ 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:04.191 ************************************ 00:20:04.191 START TEST nvmf_wait_for_buf 00:20:04.191 ************************************ 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:04.191 * Looking for test storage... 
00:20:04.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:20:04.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.191 --rc genhtml_branch_coverage=1 00:20:04.191 --rc genhtml_function_coverage=1 00:20:04.191 --rc genhtml_legend=1 00:20:04.191 --rc geninfo_all_blocks=1 00:20:04.191 --rc geninfo_unexecuted_blocks=1 00:20:04.191 00:20:04.191 ' 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:04.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.191 --rc genhtml_branch_coverage=1 00:20:04.191 --rc genhtml_function_coverage=1 00:20:04.191 --rc genhtml_legend=1 00:20:04.191 --rc geninfo_all_blocks=1 00:20:04.191 --rc geninfo_unexecuted_blocks=1 00:20:04.191 00:20:04.191 ' 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:04.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.191 --rc genhtml_branch_coverage=1 00:20:04.191 --rc genhtml_function_coverage=1 00:20:04.191 --rc genhtml_legend=1 00:20:04.191 --rc geninfo_all_blocks=1 00:20:04.191 --rc geninfo_unexecuted_blocks=1 00:20:04.191 00:20:04.191 ' 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:04.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.191 --rc genhtml_branch_coverage=1 00:20:04.191 --rc genhtml_function_coverage=1 00:20:04.191 --rc genhtml_legend=1 00:20:04.191 --rc geninfo_all_blocks=1 00:20:04.191 --rc geninfo_unexecuted_blocks=1 00:20:04.191 00:20:04.191 ' 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.191 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:04.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:04.192 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:09.463 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:09.463 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:09.463 Found net devices under 0000:af:00.0: cvl_0_0 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.463 17:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:09.463 Found net devices under 0000:af:00.1: cvl_0_1 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.463 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:09.463 17:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.464 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.464 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:09.464 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:09.464 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.464 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.464 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:09.464 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:09.464 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.464 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.722 17:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:09.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:20:09.722 00:20:09.722 --- 10.0.0.2 ping statistics --- 00:20:09.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.722 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:20:09.722 00:20:09.722 --- 10.0.0.1 ping statistics --- 00:20:09.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.722 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1940229 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1940229 00:20:09.722 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1940229 ']' 00:20:09.723 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.723 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.723 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.723 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.723 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.981 [2024-12-09 17:30:36.265157] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:20:09.981 [2024-12-09 17:30:36.265206] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.981 [2024-12-09 17:30:36.343539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.981 [2024-12-09 17:30:36.382690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.981 [2024-12-09 17:30:36.382724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:09.981 [2024-12-09 17:30:36.382731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.981 [2024-12-09 17:30:36.382737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.981 [2024-12-09 17:30:36.382742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.981 [2024-12-09 17:30:36.383213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.981 
17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.981 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.239 Malloc0 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:20:10.239 [2024-12-09 17:30:36.561023] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.239 [2024-12-09 17:30:36.589214] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:10.239 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:10.239 [2024-12-09 17:30:36.673867] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:11.614 Initializing NVMe Controllers 00:20:11.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:11.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:11.614 Initialization complete. Launching workers. 00:20:11.614 ======================================================== 00:20:11.614 Latency(us) 00:20:11.614 Device Information : IOPS MiB/s Average min max 00:20:11.614 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32236.89 7266.61 62851.93 00:20:11.614 ======================================================== 00:20:11.614 Total : 129.00 16.12 32236.89 7266.61 62851.93 00:20:11.614 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.873 17:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:11.873 rmmod nvme_tcp 00:20:11.873 rmmod nvme_fabrics 00:20:11.873 rmmod nvme_keyring 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1940229 ']' 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1940229 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1940229 ']' 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1940229 
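The pass/fail check in the trace above filters the `iobuf_get_stats` RPC output with jq (`.[] | select(.module == "nvmf_TCP") | .small_pool.retry`) and requires a nonzero retry count, since the test deliberately undersizes the small-buffer pool so reads must retry. A minimal Python sketch of that same filter, using a hypothetical sample shaped like the stats output (field layout assumed; the real run read retry_count=2038 from the live target):

```python
import json

# Hypothetical sample shaped like `rpc.py iobuf_get_stats` output; only the
# retry value for nvmf_TCP mirrors what this log actually observed (2038).
sample = json.loads("""
[
  {"module": "nvmf_TCP",
   "small_pool": {"cache": 0, "main": 0, "retry": 2038},
   "large_pool": {"cache": 0, "main": 0, "retry": 0}},
  {"module": "bdev",
   "small_pool": {"cache": 0, "main": 0, "retry": 0},
   "large_pool": {"cache": 0, "main": 0, "retry": 0}}
]
""")

# Equivalent of: jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
retry = next(m["small_pool"]["retry"] for m in sample
             if m["module"] == "nvmf_TCP")

# wait_for_buf passes only if the undersized pool forced at least one retry,
# i.e. the [[ retry_count -eq 0 ]] branch above must NOT be taken.
assert retry != 0, "expected small-pool exhaustion to trigger retries"
print(retry)
```

This mirrors the shell logic only; the actual test reads the value from the running nvmf target, not from canned JSON.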
00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1940229 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1940229' 00:20:11.873 killing process with pid 1940229 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1940229 00:20:11.873 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1940229 00:20:12.133 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:12.133 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:12.133 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:12.133 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:12.133 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:12.133 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:12.133 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:12.133 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:12.133 17:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:12.133 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.133 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.133 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.039 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:14.039 00:20:14.039 real 0m10.385s 00:20:14.039 user 0m3.965s 00:20:14.039 sys 0m4.876s 00:20:14.039 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.039 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:14.039 ************************************ 00:20:14.039 END TEST nvmf_wait_for_buf 00:20:14.039 ************************************ 00:20:14.298 17:30:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:14.298 17:30:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:14.298 17:30:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:14.298 17:30:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:14.298 17:30:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:14.298 17:30:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:20.869 
17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:20.869 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:20.869 17:30:46 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:20.869 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:20.869 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:20.870 Found net devices under 0000:af:00.0: cvl_0_0 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:20.870 Found net devices under 0000:af:00.1: cvl_0_1 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:20.870 ************************************ 00:20:20.870 START TEST nvmf_perf_adq 00:20:20.870 ************************************ 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:20.870 * Looking for test storage... 00:20:20.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:20.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.870 --rc genhtml_branch_coverage=1 00:20:20.870 --rc genhtml_function_coverage=1 00:20:20.870 --rc genhtml_legend=1 00:20:20.870 --rc geninfo_all_blocks=1 00:20:20.870 --rc geninfo_unexecuted_blocks=1 00:20:20.870 00:20:20.870 ' 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:20.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.870 --rc genhtml_branch_coverage=1 00:20:20.870 --rc genhtml_function_coverage=1 00:20:20.870 --rc genhtml_legend=1 00:20:20.870 --rc geninfo_all_blocks=1 00:20:20.870 --rc geninfo_unexecuted_blocks=1 00:20:20.870 00:20:20.870 ' 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:20.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.870 --rc genhtml_branch_coverage=1 00:20:20.870 --rc genhtml_function_coverage=1 00:20:20.870 --rc genhtml_legend=1 00:20:20.870 --rc geninfo_all_blocks=1 00:20:20.870 --rc geninfo_unexecuted_blocks=1 00:20:20.870 00:20:20.870 ' 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:20.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.870 --rc genhtml_branch_coverage=1 00:20:20.870 --rc genhtml_function_coverage=1 00:20:20.870 --rc genhtml_legend=1 00:20:20.870 --rc geninfo_all_blocks=1 00:20:20.870 --rc geninfo_unexecuted_blocks=1 00:20:20.870 00:20:20.870 ' 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.870 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.871 17:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:20.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:20.871 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:26.144 17:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.144 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:26.145 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:26.145 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:26.145 Found net devices under 0000:af:00.0: cvl_0_0 00:20:26.145 17:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:26.145 Found net devices under 0000:af:00.1: cvl_0_1 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
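The discovery pass above (gather_supported_nvmf_pci_devs) maps known Intel/Mellanox PCI device IDs into driver families and then globs `/sys/bus/pci/devices/$pci/net/` to find the kernel interface behind each port. A minimal sketch of that glob-and-strip step, run against a fake sysfs tree so it works without real NICs (the temp-dir layout and the `run`-free logic are illustrative assumptions; the glob and the `##*/` strip mirror common.sh@411 and @427 in the log):

```shell
#!/usr/bin/env bash
# Sketch of the net-device discovery seen in the log: for each PCI
# address, glob its net/ directory and keep only the interface names.
# A throwaway fake sysfs tree stands in for /sys/bus/pci/devices.
set -euo pipefail

sysfs=$(mktemp -d)   # stand-in for /sys/bus/pci/devices (assumption)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" \
         "$sysfs/0000:af:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)        # same glob as common.sh@411
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip path, as in common.sh@427
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "total=${#net_devs[@]}"
```

With two discovered ports, TCP_INTERFACE_LIST ends up non-empty and the test proceeds.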
00:20:26.145 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:27.082 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:29.616 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:34.897 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:34.897 17:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:34.897 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:34.897 Found net devices under 0000:af:00.0: cvl_0_0 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:34.897 Found net devices under 0000:af:00.1: cvl_0_1 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:34.897 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:34.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:34.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:20:34.898 00:20:34.898 --- 10.0.0.2 ping statistics --- 00:20:34.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.898 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:20:34.898 00:20:34.898 --- 10.0.0.1 ping statistics --- 00:20:34.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.898 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:34.898 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
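The nvmf_tcp_init sequence above moves the target-side port (cvl_0_0) into its own network namespace so that target and initiator can exchange real TCP traffic over two ports of one host. A dry-run sketch of those steps, with names and addresses copied from the log; the `run` wrapper only echoes, so no root is needed (swapping it for `"$@"` and running as root would execute the real commands):

```shell
#!/usr/bin/env bash
# Dry-run of the namespace topology built in the log: target interface
# in netns cvl_0_0_ns_spdk at 10.0.0.2, initiator in the root netns at
# 10.0.0.1, port 4420 opened, reachability checked with ping.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

run() { echo "+ $*"; }   # replace body with "$@" to actually execute (needs root)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                      # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INI_IP"  # target -> initiator
```

The two successful pings in the log (10.0.0.2 and 10.0.0.1, one packet each, 0% loss) are what let nvmftestinit return 0 and the test continue.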
start_nvmf_tgt 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1948741 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1948741 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1948741 ']' 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.898 [2024-12-09 17:31:01.084984] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:20:34.898 [2024-12-09 17:31:01.085033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.898 [2024-12-09 17:31:01.163398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.898 [2024-12-09 17:31:01.205718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.898 [2024-12-09 17:31:01.205757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.898 [2024-12-09 17:31:01.205764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.898 [2024-12-09 17:31:01.205770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.898 [2024-12-09 17:31:01.205775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:34.898 [2024-12-09 17:31:01.207191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.898 [2024-12-09 17:31:01.207256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.898 [2024-12-09 17:31:01.207365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.898 [2024-12-09 17:31:01.207367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:34.898 17:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.898 [2024-12-09 17:31:01.409135] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.898 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.158 Malloc1 00:20:35.158 17:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.158 [2024-12-09 17:31:01.463676] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1948769 00:20:35.158 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:35.158 17:31:01 
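The adq_configure_nvmf_target calls above (perf_adq.sh@42-49) correspond to a fixed sequence of SPDK JSON-RPC methods. A sketch of the same sequence written as `scripts/rpc.py` invocations, with every method name and flag copied from the log; the `rpc` wrapper only echoes here, and pointing it at the real rpc.py (plus the `-s`/netns plumbing the test uses) is left as an assumption:

```shell
#!/usr/bin/env bash
# The ADQ target configuration from the log, as explicit RPC calls:
# placement-id sockets, TCP transport with sock-priority 0, a 64 MiB
# malloc bdev, and a subsystem listening on 10.0.0.2:4420.
set -euo pipefail

NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "+ rpc.py $*"; }   # stand-in for scripts/rpc.py -s /var/tmp/spdk.sock

rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
rpc framework_start_init
rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
rpc bdev_malloc_create 64 512 -b Malloc1
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc1
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The ordering matters: the target was started with `--wait-for-rpc`, so socket options must be set before `framework_start_init`, and the transport must exist before the listener is added.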
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:37.064 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:37.064 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.064 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.064 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.064 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:37.064 "tick_rate": 2100000000, 00:20:37.064 "poll_groups": [ 00:20:37.064 { 00:20:37.064 "name": "nvmf_tgt_poll_group_000", 00:20:37.064 "admin_qpairs": 1, 00:20:37.064 "io_qpairs": 1, 00:20:37.064 "current_admin_qpairs": 1, 00:20:37.064 "current_io_qpairs": 1, 00:20:37.064 "pending_bdev_io": 0, 00:20:37.064 "completed_nvme_io": 19378, 00:20:37.064 "transports": [ 00:20:37.064 { 00:20:37.064 "trtype": "TCP" 00:20:37.064 } 00:20:37.064 ] 00:20:37.064 }, 00:20:37.064 { 00:20:37.064 "name": "nvmf_tgt_poll_group_001", 00:20:37.064 "admin_qpairs": 0, 00:20:37.064 "io_qpairs": 1, 00:20:37.064 "current_admin_qpairs": 0, 00:20:37.064 "current_io_qpairs": 1, 00:20:37.064 "pending_bdev_io": 0, 00:20:37.064 "completed_nvme_io": 19555, 00:20:37.064 "transports": [ 00:20:37.064 { 00:20:37.064 "trtype": "TCP" 00:20:37.064 } 00:20:37.064 ] 00:20:37.064 }, 00:20:37.064 { 00:20:37.064 "name": "nvmf_tgt_poll_group_002", 00:20:37.064 "admin_qpairs": 0, 00:20:37.064 "io_qpairs": 1, 00:20:37.064 "current_admin_qpairs": 0, 00:20:37.064 "current_io_qpairs": 1, 00:20:37.064 "pending_bdev_io": 0, 00:20:37.064 "completed_nvme_io": 19883, 00:20:37.064 
"transports": [ 00:20:37.064 { 00:20:37.064 "trtype": "TCP" 00:20:37.064 } 00:20:37.064 ] 00:20:37.064 }, 00:20:37.064 { 00:20:37.064 "name": "nvmf_tgt_poll_group_003", 00:20:37.064 "admin_qpairs": 0, 00:20:37.064 "io_qpairs": 1, 00:20:37.064 "current_admin_qpairs": 0, 00:20:37.064 "current_io_qpairs": 1, 00:20:37.064 "pending_bdev_io": 0, 00:20:37.064 "completed_nvme_io": 19370, 00:20:37.064 "transports": [ 00:20:37.064 { 00:20:37.064 "trtype": "TCP" 00:20:37.064 } 00:20:37.064 ] 00:20:37.064 } 00:20:37.064 ] 00:20:37.064 }' 00:20:37.064 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:37.064 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:37.064 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:37.064 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:37.064 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1948769 00:20:45.289 Initializing NVMe Controllers 00:20:45.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:45.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:45.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:45.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:45.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:45.289 Initialization complete. Launching workers. 
00:20:45.289 ======================================================== 00:20:45.289 Latency(us) 00:20:45.289 Device Information : IOPS MiB/s Average min max 00:20:45.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10534.90 41.15 6074.47 1780.28 10417.84 00:20:45.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10431.50 40.75 6135.75 2114.16 11824.50 00:20:45.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10346.50 40.42 6186.51 1855.05 10758.04 00:20:45.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10328.70 40.35 6196.38 2046.02 10233.81 00:20:45.289 ======================================================== 00:20:45.289 Total : 41641.59 162.66 6147.90 1780.28 11824.50 00:20:45.289 00:20:45.289 [2024-12-09 17:31:11.630481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd15340 is same with the state(6) to be set 00:20:45.289 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:45.289 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:45.289 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:45.289 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:45.289 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:45.289 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:45.290 rmmod nvme_tcp 00:20:45.290 rmmod nvme_fabrics 00:20:45.290 rmmod nvme_keyring 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- 
# set -e 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1948741 ']' 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1948741 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1948741 ']' 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1948741 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1948741 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1948741' 00:20:45.290 killing process with pid 1948741 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1948741 00:20:45.290 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1948741 00:20:45.549 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:45.549 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:45.549 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:45.549 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 
-- # iptr 00:20:45.549 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:45.549 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:45.549 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:45.549 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:45.549 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:45.549 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.549 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.549 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.086 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:48.086 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:48.086 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:48.086 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:49.023 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:51.556 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.833 17:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # 
net_devs=() 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:56.833 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:56.834 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.834 
17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:56.834 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.0: cvl_0_0' 00:20:56.834 Found net devices under 0000:af:00.0: cvl_0_0 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:56.834 Found net devices under 0000:af:00.1: cvl_0_1 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:56.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:20:56.834 00:20:56.834 --- 10.0.0.2 ping statistics --- 00:20:56.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.834 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:56.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:20:56.834 00:20:56.834 --- 10.0.0.1 ping statistics --- 00:20:56.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.834 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:56.834 net.core.busy_poll = 1 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:56.834 net.core.busy_read = 1 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:56.834 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:56.834 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:56.834 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:56.834 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:56.834 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:56.834 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.834 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.834 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.834 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1952721 00:20:56.834 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1952721 00:20:56.834 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:56.834 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1952721 ']' 00:20:56.834 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.835 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.835 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.835 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.835 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.835 [2024-12-09 17:31:23.215482] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:20:56.835 [2024-12-09 17:31:23.215529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.835 [2024-12-09 17:31:23.291093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.835 [2024-12-09 17:31:23.330086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.835 [2024-12-09 17:31:23.330123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.835 [2024-12-09 17:31:23.330130] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.835 [2024-12-09 17:31:23.330136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:56.835 [2024-12-09 17:31:23.330141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.835 [2024-12-09 17:31:23.331613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.835 [2024-12-09 17:31:23.331720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.835 [2024-12-09 17:31:23.331803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.835 [2024-12-09 17:31:23.331804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.835 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.835 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:56.835 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.835 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.835 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.095 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.095 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:57.095 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:57.095 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:57.095 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.095 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.095 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.096 [2024-12-09 17:31:23.537849] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.096 17:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.096 Malloc1 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.096 [2024-12-09 17:31:23.607068] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1952819 
00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:57.096 17:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:59.634 17:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:59.634 17:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.634 17:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.634 17:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.634 17:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:59.634 "tick_rate": 2100000000, 00:20:59.634 "poll_groups": [ 00:20:59.634 { 00:20:59.634 "name": "nvmf_tgt_poll_group_000", 00:20:59.634 "admin_qpairs": 1, 00:20:59.634 "io_qpairs": 3, 00:20:59.634 "current_admin_qpairs": 1, 00:20:59.634 "current_io_qpairs": 3, 00:20:59.634 "pending_bdev_io": 0, 00:20:59.634 "completed_nvme_io": 29845, 00:20:59.634 "transports": [ 00:20:59.634 { 00:20:59.634 "trtype": "TCP" 00:20:59.634 } 00:20:59.634 ] 00:20:59.634 }, 00:20:59.634 { 00:20:59.634 "name": "nvmf_tgt_poll_group_001", 00:20:59.635 "admin_qpairs": 0, 00:20:59.635 "io_qpairs": 1, 00:20:59.635 "current_admin_qpairs": 0, 00:20:59.635 "current_io_qpairs": 1, 00:20:59.635 "pending_bdev_io": 0, 00:20:59.635 "completed_nvme_io": 29687, 00:20:59.635 "transports": [ 00:20:59.635 { 00:20:59.635 "trtype": "TCP" 00:20:59.635 } 00:20:59.635 ] 00:20:59.635 }, 00:20:59.635 { 00:20:59.635 "name": "nvmf_tgt_poll_group_002", 00:20:59.635 "admin_qpairs": 0, 00:20:59.635 "io_qpairs": 0, 00:20:59.635 "current_admin_qpairs": 0, 
00:20:59.635 "current_io_qpairs": 0, 00:20:59.635 "pending_bdev_io": 0, 00:20:59.635 "completed_nvme_io": 0, 00:20:59.635 "transports": [ 00:20:59.635 { 00:20:59.635 "trtype": "TCP" 00:20:59.635 } 00:20:59.635 ] 00:20:59.635 }, 00:20:59.635 { 00:20:59.635 "name": "nvmf_tgt_poll_group_003", 00:20:59.635 "admin_qpairs": 0, 00:20:59.635 "io_qpairs": 0, 00:20:59.635 "current_admin_qpairs": 0, 00:20:59.635 "current_io_qpairs": 0, 00:20:59.635 "pending_bdev_io": 0, 00:20:59.635 "completed_nvme_io": 0, 00:20:59.635 "transports": [ 00:20:59.635 { 00:20:59.635 "trtype": "TCP" 00:20:59.635 } 00:20:59.635 ] 00:20:59.635 } 00:20:59.635 ] 00:20:59.635 }' 00:20:59.635 17:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:59.635 17:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:59.635 17:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:59.635 17:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:59.635 17:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1952819 00:21:07.762 Initializing NVMe Controllers 00:21:07.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:07.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:07.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:07.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:07.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:07.762 Initialization complete. Launching workers. 
00:21:07.762 ======================================================== 00:21:07.762 Latency(us) 00:21:07.762 Device Information : IOPS MiB/s Average min max 00:21:07.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5069.80 19.80 12635.61 1874.00 57845.34 00:21:07.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5541.70 21.65 11583.18 1862.03 57892.66 00:21:07.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4870.80 19.03 13182.94 1808.96 60047.90 00:21:07.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15859.50 61.95 4034.76 1842.86 6836.85 00:21:07.762 ======================================================== 00:21:07.762 Total : 31341.79 122.43 8182.41 1808.96 60047.90 00:21:07.762 00:21:07.762 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:07.762 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:07.762 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:07.762 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:07.762 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:07.762 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:07.763 rmmod nvme_tcp 00:21:07.763 rmmod nvme_fabrics 00:21:07.763 rmmod nvme_keyring 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:07.763 17:31:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1952721 ']' 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1952721 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1952721 ']' 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1952721 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1952721 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1952721' 00:21:07.763 killing process with pid 1952721 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1952721 00:21:07.763 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1952721 00:21:07.763 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:07.763 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:07.763 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:07.763 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:07.763 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:07.763 
17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:07.763 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:07.763 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:07.763 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:07.763 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.763 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.763 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:11.054 00:21:11.054 real 0m50.915s 00:21:11.054 user 2m44.044s 00:21:11.054 sys 0m10.183s 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.054 ************************************ 00:21:11.054 END TEST nvmf_perf_adq 00:21:11.054 ************************************ 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:11.054 ************************************ 00:21:11.054 START TEST nvmf_shutdown 00:21:11.054 ************************************ 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:11.054 * Looking for test storage... 00:21:11.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:11.054 17:31:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:11.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.054 --rc genhtml_branch_coverage=1 00:21:11.054 --rc genhtml_function_coverage=1 00:21:11.054 --rc genhtml_legend=1 00:21:11.054 --rc geninfo_all_blocks=1 00:21:11.054 --rc geninfo_unexecuted_blocks=1 00:21:11.054 00:21:11.054 ' 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:11.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.054 --rc genhtml_branch_coverage=1 00:21:11.054 --rc genhtml_function_coverage=1 00:21:11.054 --rc genhtml_legend=1 00:21:11.054 --rc geninfo_all_blocks=1 00:21:11.054 --rc geninfo_unexecuted_blocks=1 00:21:11.054 00:21:11.054 ' 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:11.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.054 --rc genhtml_branch_coverage=1 00:21:11.054 --rc genhtml_function_coverage=1 00:21:11.054 --rc genhtml_legend=1 00:21:11.054 --rc geninfo_all_blocks=1 00:21:11.054 --rc geninfo_unexecuted_blocks=1 00:21:11.054 00:21:11.054 ' 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:11.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.054 --rc genhtml_branch_coverage=1 00:21:11.054 --rc genhtml_function_coverage=1 00:21:11.054 --rc genhtml_legend=1 00:21:11.054 --rc geninfo_all_blocks=1 00:21:11.054 --rc geninfo_unexecuted_blocks=1 00:21:11.054 00:21:11.054 ' 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.054 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:11.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:11.055 ************************************ 00:21:11.055 START TEST nvmf_shutdown_tc1 00:21:11.055 ************************************ 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:11.055 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:17.626 17:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.626 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.627 17:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:17.627 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.627 17:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:17.627 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:17.627 Found net devices under 0000:af:00.0: cvl_0_0 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:17.627 Found net devices under 0000:af:00.1: cvl_0_1 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:17.627 17:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:17.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:21:17.627 00:21:17.627 --- 10.0.0.2 ping statistics --- 00:21:17.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.627 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:17.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:17.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:21:17.627 00:21:17.627 --- 10.0.0.1 ping statistics --- 00:21:17.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.627 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:17.627 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1958167 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1958167 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1958167 ']' 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:17.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.628 [2024-12-09 17:31:43.613873] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:21:17.628 [2024-12-09 17:31:43.613916] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.628 [2024-12-09 17:31:43.692886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.628 [2024-12-09 17:31:43.733454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.628 [2024-12-09 17:31:43.733489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.628 [2024-12-09 17:31:43.733496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.628 [2024-12-09 17:31:43.733502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.628 [2024-12-09 17:31:43.733507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:17.628 [2024-12-09 17:31:43.734983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.628 [2024-12-09 17:31:43.735090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.628 [2024-12-09 17:31:43.735227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.628 [2024-12-09 17:31:43.735228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.628 [2024-12-09 17:31:43.871278] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.628 17:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.628 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.628 Malloc1 00:21:17.628 [2024-12-09 17:31:43.995748] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.628 Malloc2 00:21:17.628 Malloc3 00:21:17.628 Malloc4 00:21:17.628 Malloc5 00:21:17.887 Malloc6 00:21:17.887 Malloc7 00:21:17.887 Malloc8 00:21:17.887 Malloc9 
00:21:17.887 Malloc10 00:21:17.887 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.887 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:17.887 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.887 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1958432 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1958432 /var/tmp/bdevperf.sock 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1958432 ']' 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.146 { 00:21:18.146 "params": { 00:21:18.146 "name": "Nvme$subsystem", 00:21:18.146 "trtype": "$TEST_TRANSPORT", 00:21:18.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.146 "adrfam": "ipv4", 00:21:18.146 "trsvcid": "$NVMF_PORT", 00:21:18.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.146 "hdgst": ${hdgst:-false}, 00:21:18.146 "ddgst": ${ddgst:-false} 00:21:18.146 }, 00:21:18.146 "method": "bdev_nvme_attach_controller" 00:21:18.146 } 00:21:18.146 EOF 00:21:18.146 )") 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.146 { 00:21:18.146 "params": { 00:21:18.146 "name": "Nvme$subsystem", 00:21:18.146 "trtype": "$TEST_TRANSPORT", 00:21:18.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.146 "adrfam": "ipv4", 00:21:18.146 "trsvcid": "$NVMF_PORT", 00:21:18.146 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.146 "hdgst": ${hdgst:-false}, 00:21:18.146 "ddgst": ${ddgst:-false} 00:21:18.146 }, 00:21:18.146 "method": "bdev_nvme_attach_controller" 00:21:18.146 } 00:21:18.146 EOF 00:21:18.146 )") 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.146 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.146 { 00:21:18.146 "params": { 00:21:18.146 "name": "Nvme$subsystem", 00:21:18.146 "trtype": "$TEST_TRANSPORT", 00:21:18.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.146 "adrfam": "ipv4", 00:21:18.146 "trsvcid": "$NVMF_PORT", 00:21:18.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.147 "hdgst": ${hdgst:-false}, 00:21:18.147 "ddgst": ${ddgst:-false} 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 00:21:18.147 } 00:21:18.147 EOF 00:21:18.147 )") 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.147 { 00:21:18.147 "params": { 00:21:18.147 "name": "Nvme$subsystem", 00:21:18.147 "trtype": "$TEST_TRANSPORT", 00:21:18.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.147 "adrfam": "ipv4", 00:21:18.147 "trsvcid": "$NVMF_PORT", 00:21:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.147 "hdgst": 
${hdgst:-false}, 00:21:18.147 "ddgst": ${ddgst:-false} 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 00:21:18.147 } 00:21:18.147 EOF 00:21:18.147 )") 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.147 { 00:21:18.147 "params": { 00:21:18.147 "name": "Nvme$subsystem", 00:21:18.147 "trtype": "$TEST_TRANSPORT", 00:21:18.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.147 "adrfam": "ipv4", 00:21:18.147 "trsvcid": "$NVMF_PORT", 00:21:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.147 "hdgst": ${hdgst:-false}, 00:21:18.147 "ddgst": ${ddgst:-false} 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 00:21:18.147 } 00:21:18.147 EOF 00:21:18.147 )") 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.147 { 00:21:18.147 "params": { 00:21:18.147 "name": "Nvme$subsystem", 00:21:18.147 "trtype": "$TEST_TRANSPORT", 00:21:18.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.147 "adrfam": "ipv4", 00:21:18.147 "trsvcid": "$NVMF_PORT", 00:21:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.147 "hdgst": ${hdgst:-false}, 00:21:18.147 "ddgst": ${ddgst:-false} 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 
00:21:18.147 } 00:21:18.147 EOF 00:21:18.147 )") 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.147 { 00:21:18.147 "params": { 00:21:18.147 "name": "Nvme$subsystem", 00:21:18.147 "trtype": "$TEST_TRANSPORT", 00:21:18.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.147 "adrfam": "ipv4", 00:21:18.147 "trsvcid": "$NVMF_PORT", 00:21:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.147 "hdgst": ${hdgst:-false}, 00:21:18.147 "ddgst": ${ddgst:-false} 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 00:21:18.147 } 00:21:18.147 EOF 00:21:18.147 )") 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.147 [2024-12-09 17:31:44.476582] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:21:18.147 [2024-12-09 17:31:44.476631] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.147 { 00:21:18.147 "params": { 00:21:18.147 "name": "Nvme$subsystem", 00:21:18.147 "trtype": "$TEST_TRANSPORT", 00:21:18.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.147 "adrfam": "ipv4", 00:21:18.147 "trsvcid": "$NVMF_PORT", 00:21:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.147 "hdgst": ${hdgst:-false}, 00:21:18.147 "ddgst": ${ddgst:-false} 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 00:21:18.147 } 00:21:18.147 EOF 00:21:18.147 )") 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.147 { 00:21:18.147 "params": { 00:21:18.147 "name": "Nvme$subsystem", 00:21:18.147 "trtype": "$TEST_TRANSPORT", 00:21:18.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.147 "adrfam": "ipv4", 00:21:18.147 "trsvcid": "$NVMF_PORT", 00:21:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.147 "hdgst": ${hdgst:-false}, 00:21:18.147 "ddgst": ${ddgst:-false} 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 
00:21:18.147 } 00:21:18.147 EOF 00:21:18.147 )") 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.147 { 00:21:18.147 "params": { 00:21:18.147 "name": "Nvme$subsystem", 00:21:18.147 "trtype": "$TEST_TRANSPORT", 00:21:18.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.147 "adrfam": "ipv4", 00:21:18.147 "trsvcid": "$NVMF_PORT", 00:21:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.147 "hdgst": ${hdgst:-false}, 00:21:18.147 "ddgst": ${ddgst:-false} 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 00:21:18.147 } 00:21:18.147 EOF 00:21:18.147 )") 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:18.147 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:18.147 "params": { 00:21:18.147 "name": "Nvme1", 00:21:18.147 "trtype": "tcp", 00:21:18.147 "traddr": "10.0.0.2", 00:21:18.147 "adrfam": "ipv4", 00:21:18.147 "trsvcid": "4420", 00:21:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.147 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.147 "hdgst": false, 00:21:18.147 "ddgst": false 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 00:21:18.147 },{ 00:21:18.147 "params": { 00:21:18.147 "name": "Nvme2", 00:21:18.147 "trtype": "tcp", 00:21:18.147 "traddr": "10.0.0.2", 00:21:18.147 "adrfam": "ipv4", 00:21:18.147 "trsvcid": "4420", 00:21:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:18.147 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:18.147 "hdgst": false, 00:21:18.147 "ddgst": false 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 00:21:18.147 },{ 00:21:18.147 "params": { 00:21:18.147 "name": "Nvme3", 00:21:18.147 "trtype": "tcp", 00:21:18.147 "traddr": "10.0.0.2", 00:21:18.147 "adrfam": "ipv4", 00:21:18.147 "trsvcid": "4420", 00:21:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:18.147 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:18.147 "hdgst": false, 00:21:18.147 "ddgst": false 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 00:21:18.147 },{ 00:21:18.147 "params": { 00:21:18.147 "name": "Nvme4", 00:21:18.147 "trtype": "tcp", 00:21:18.147 "traddr": "10.0.0.2", 00:21:18.147 "adrfam": "ipv4", 00:21:18.147 "trsvcid": "4420", 00:21:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:18.147 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:18.147 "hdgst": false, 00:21:18.147 "ddgst": false 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 00:21:18.147 },{ 00:21:18.147 "params": { 
00:21:18.147 "name": "Nvme5", 00:21:18.147 "trtype": "tcp", 00:21:18.147 "traddr": "10.0.0.2", 00:21:18.147 "adrfam": "ipv4", 00:21:18.147 "trsvcid": "4420", 00:21:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:18.147 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:18.147 "hdgst": false, 00:21:18.147 "ddgst": false 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 00:21:18.147 },{ 00:21:18.147 "params": { 00:21:18.147 "name": "Nvme6", 00:21:18.147 "trtype": "tcp", 00:21:18.147 "traddr": "10.0.0.2", 00:21:18.147 "adrfam": "ipv4", 00:21:18.147 "trsvcid": "4420", 00:21:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:18.147 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:18.147 "hdgst": false, 00:21:18.147 "ddgst": false 00:21:18.147 }, 00:21:18.147 "method": "bdev_nvme_attach_controller" 00:21:18.148 },{ 00:21:18.148 "params": { 00:21:18.148 "name": "Nvme7", 00:21:18.148 "trtype": "tcp", 00:21:18.148 "traddr": "10.0.0.2", 00:21:18.148 "adrfam": "ipv4", 00:21:18.148 "trsvcid": "4420", 00:21:18.148 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:18.148 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:18.148 "hdgst": false, 00:21:18.148 "ddgst": false 00:21:18.148 }, 00:21:18.148 "method": "bdev_nvme_attach_controller" 00:21:18.148 },{ 00:21:18.148 "params": { 00:21:18.148 "name": "Nvme8", 00:21:18.148 "trtype": "tcp", 00:21:18.148 "traddr": "10.0.0.2", 00:21:18.148 "adrfam": "ipv4", 00:21:18.148 "trsvcid": "4420", 00:21:18.148 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:18.148 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:18.148 "hdgst": false, 00:21:18.148 "ddgst": false 00:21:18.148 }, 00:21:18.148 "method": "bdev_nvme_attach_controller" 00:21:18.148 },{ 00:21:18.148 "params": { 00:21:18.148 "name": "Nvme9", 00:21:18.148 "trtype": "tcp", 00:21:18.148 "traddr": "10.0.0.2", 00:21:18.148 "adrfam": "ipv4", 00:21:18.148 "trsvcid": "4420", 00:21:18.148 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:18.148 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:18.148 "hdgst": false, 00:21:18.148 "ddgst": false 00:21:18.148 }, 00:21:18.148 "method": "bdev_nvme_attach_controller" 00:21:18.148 },{ 00:21:18.148 "params": { 00:21:18.148 "name": "Nvme10", 00:21:18.148 "trtype": "tcp", 00:21:18.148 "traddr": "10.0.0.2", 00:21:18.148 "adrfam": "ipv4", 00:21:18.148 "trsvcid": "4420", 00:21:18.148 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:18.148 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:18.148 "hdgst": false, 00:21:18.148 "ddgst": false 00:21:18.148 }, 00:21:18.148 "method": "bdev_nvme_attach_controller" 00:21:18.148 }' 00:21:18.148 [2024-12-09 17:31:44.553875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.148 [2024-12-09 17:31:44.593334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.051 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.051 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:20.051 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:20.051 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.051 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:20.051 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.051 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1958432 00:21:20.051 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:20.051 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:20.987 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1958432 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1958167 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.987 { 00:21:20.987 "params": { 00:21:20.987 "name": "Nvme$subsystem", 00:21:20.987 "trtype": "$TEST_TRANSPORT", 00:21:20.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.987 "adrfam": "ipv4", 00:21:20.987 "trsvcid": "$NVMF_PORT", 00:21:20.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.987 "hdgst": ${hdgst:-false}, 00:21:20.987 "ddgst": ${ddgst:-false} 00:21:20.987 }, 00:21:20.987 "method": "bdev_nvme_attach_controller" 00:21:20.987 } 00:21:20.987 EOF 00:21:20.987 )") 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.987 17:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.987 { 00:21:20.987 "params": { 00:21:20.987 "name": "Nvme$subsystem", 00:21:20.987 "trtype": "$TEST_TRANSPORT", 00:21:20.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.987 "adrfam": "ipv4", 00:21:20.987 "trsvcid": "$NVMF_PORT", 00:21:20.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.987 "hdgst": ${hdgst:-false}, 00:21:20.987 "ddgst": ${ddgst:-false} 00:21:20.987 }, 00:21:20.987 "method": "bdev_nvme_attach_controller" 00:21:20.987 } 00:21:20.987 EOF 00:21:20.987 )") 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.987 { 00:21:20.987 "params": { 00:21:20.987 "name": "Nvme$subsystem", 00:21:20.987 "trtype": "$TEST_TRANSPORT", 00:21:20.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.987 "adrfam": "ipv4", 00:21:20.987 "trsvcid": "$NVMF_PORT", 00:21:20.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.987 "hdgst": ${hdgst:-false}, 00:21:20.987 "ddgst": ${ddgst:-false} 00:21:20.987 }, 00:21:20.987 "method": "bdev_nvme_attach_controller" 00:21:20.987 } 00:21:20.987 EOF 00:21:20.987 )") 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.987 
17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.987 { 00:21:20.987 "params": { 00:21:20.987 "name": "Nvme$subsystem", 00:21:20.987 "trtype": "$TEST_TRANSPORT", 00:21:20.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.987 "adrfam": "ipv4", 00:21:20.987 "trsvcid": "$NVMF_PORT", 00:21:20.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.987 "hdgst": ${hdgst:-false}, 00:21:20.987 "ddgst": ${ddgst:-false} 00:21:20.987 }, 00:21:20.987 "method": "bdev_nvme_attach_controller" 00:21:20.987 } 00:21:20.987 EOF 00:21:20.987 )") 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.987 { 00:21:20.987 "params": { 00:21:20.987 "name": "Nvme$subsystem", 00:21:20.987 "trtype": "$TEST_TRANSPORT", 00:21:20.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.987 "adrfam": "ipv4", 00:21:20.987 "trsvcid": "$NVMF_PORT", 00:21:20.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.987 "hdgst": ${hdgst:-false}, 00:21:20.987 "ddgst": ${ddgst:-false} 00:21:20.987 }, 00:21:20.987 "method": "bdev_nvme_attach_controller" 00:21:20.987 } 00:21:20.987 EOF 00:21:20.987 )") 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:21:20.987 { 00:21:20.987 "params": { 00:21:20.987 "name": "Nvme$subsystem", 00:21:20.987 "trtype": "$TEST_TRANSPORT", 00:21:20.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.987 "adrfam": "ipv4", 00:21:20.987 "trsvcid": "$NVMF_PORT", 00:21:20.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.987 "hdgst": ${hdgst:-false}, 00:21:20.987 "ddgst": ${ddgst:-false} 00:21:20.987 }, 00:21:20.987 "method": "bdev_nvme_attach_controller" 00:21:20.987 } 00:21:20.987 EOF 00:21:20.987 )") 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.987 { 00:21:20.987 "params": { 00:21:20.987 "name": "Nvme$subsystem", 00:21:20.987 "trtype": "$TEST_TRANSPORT", 00:21:20.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.987 "adrfam": "ipv4", 00:21:20.987 "trsvcid": "$NVMF_PORT", 00:21:20.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.987 "hdgst": ${hdgst:-false}, 00:21:20.987 "ddgst": ${ddgst:-false} 00:21:20.987 }, 00:21:20.987 "method": "bdev_nvme_attach_controller" 00:21:20.987 } 00:21:20.987 EOF 00:21:20.987 )") 00:21:20.987 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.987 [2024-12-09 17:31:47.399252] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:21:20.987 [2024-12-09 17:31:47.399301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1958912 ] 00:21:20.988 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.988 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.988 { 00:21:20.988 "params": { 00:21:20.988 "name": "Nvme$subsystem", 00:21:20.988 "trtype": "$TEST_TRANSPORT", 00:21:20.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.988 "adrfam": "ipv4", 00:21:20.988 "trsvcid": "$NVMF_PORT", 00:21:20.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.988 "hdgst": ${hdgst:-false}, 00:21:20.988 "ddgst": ${ddgst:-false} 00:21:20.988 }, 00:21:20.988 "method": "bdev_nvme_attach_controller" 00:21:20.988 } 00:21:20.988 EOF 00:21:20.988 )") 00:21:20.988 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.988 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.988 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.988 { 00:21:20.988 "params": { 00:21:20.988 "name": "Nvme$subsystem", 00:21:20.988 "trtype": "$TEST_TRANSPORT", 00:21:20.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.988 "adrfam": "ipv4", 00:21:20.988 "trsvcid": "$NVMF_PORT", 00:21:20.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.988 "hdgst": ${hdgst:-false}, 00:21:20.988 "ddgst": ${ddgst:-false} 00:21:20.988 }, 00:21:20.988 "method": 
"bdev_nvme_attach_controller" 00:21:20.988 } 00:21:20.988 EOF 00:21:20.988 )") 00:21:20.988 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.988 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.988 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.988 { 00:21:20.988 "params": { 00:21:20.988 "name": "Nvme$subsystem", 00:21:20.988 "trtype": "$TEST_TRANSPORT", 00:21:20.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.988 "adrfam": "ipv4", 00:21:20.988 "trsvcid": "$NVMF_PORT", 00:21:20.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.988 "hdgst": ${hdgst:-false}, 00:21:20.988 "ddgst": ${ddgst:-false} 00:21:20.988 }, 00:21:20.988 "method": "bdev_nvme_attach_controller" 00:21:20.988 } 00:21:20.988 EOF 00:21:20.988 )") 00:21:20.988 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.988 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:20.988 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:20.988 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:20.988 "params": { 00:21:20.988 "name": "Nvme1", 00:21:20.988 "trtype": "tcp", 00:21:20.988 "traddr": "10.0.0.2", 00:21:20.988 "adrfam": "ipv4", 00:21:20.988 "trsvcid": "4420", 00:21:20.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.988 "hdgst": false, 00:21:20.988 "ddgst": false 00:21:20.988 }, 00:21:20.988 "method": "bdev_nvme_attach_controller" 00:21:20.988 },{ 00:21:20.988 "params": { 00:21:20.988 "name": "Nvme2", 00:21:20.988 "trtype": "tcp", 00:21:20.988 "traddr": "10.0.0.2", 00:21:20.988 "adrfam": "ipv4", 00:21:20.988 "trsvcid": "4420", 00:21:20.988 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:20.988 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:20.988 "hdgst": false, 00:21:20.988 "ddgst": false 00:21:20.988 }, 00:21:20.988 "method": "bdev_nvme_attach_controller" 00:21:20.988 },{ 00:21:20.988 "params": { 00:21:20.988 "name": "Nvme3", 00:21:20.988 "trtype": "tcp", 00:21:20.988 "traddr": "10.0.0.2", 00:21:20.988 "adrfam": "ipv4", 00:21:20.988 "trsvcid": "4420", 00:21:20.988 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:20.988 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:20.988 "hdgst": false, 00:21:20.988 "ddgst": false 00:21:20.988 }, 00:21:20.988 "method": "bdev_nvme_attach_controller" 00:21:20.988 },{ 00:21:20.988 "params": { 00:21:20.988 "name": "Nvme4", 00:21:20.988 "trtype": "tcp", 00:21:20.988 "traddr": "10.0.0.2", 00:21:20.988 "adrfam": "ipv4", 00:21:20.988 "trsvcid": "4420", 00:21:20.988 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:20.988 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:20.988 "hdgst": false, 00:21:20.988 "ddgst": false 00:21:20.988 }, 00:21:20.988 "method": "bdev_nvme_attach_controller" 00:21:20.988 },{ 00:21:20.988 "params": { 
00:21:20.988 "name": "Nvme5", 00:21:20.988 "trtype": "tcp", 00:21:20.988 "traddr": "10.0.0.2", 00:21:20.988 "adrfam": "ipv4", 00:21:20.988 "trsvcid": "4420", 00:21:20.988 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:20.988 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:20.988 "hdgst": false, 00:21:20.988 "ddgst": false 00:21:20.988 }, 00:21:20.988 "method": "bdev_nvme_attach_controller" 00:21:20.988 },{ 00:21:20.988 "params": { 00:21:20.988 "name": "Nvme6", 00:21:20.988 "trtype": "tcp", 00:21:20.988 "traddr": "10.0.0.2", 00:21:20.988 "adrfam": "ipv4", 00:21:20.988 "trsvcid": "4420", 00:21:20.988 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:20.988 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:20.988 "hdgst": false, 00:21:20.988 "ddgst": false 00:21:20.988 }, 00:21:20.988 "method": "bdev_nvme_attach_controller" 00:21:20.988 },{ 00:21:20.988 "params": { 00:21:20.988 "name": "Nvme7", 00:21:20.988 "trtype": "tcp", 00:21:20.988 "traddr": "10.0.0.2", 00:21:20.988 "adrfam": "ipv4", 00:21:20.988 "trsvcid": "4420", 00:21:20.988 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:20.988 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:20.988 "hdgst": false, 00:21:20.988 "ddgst": false 00:21:20.988 }, 00:21:20.988 "method": "bdev_nvme_attach_controller" 00:21:20.988 },{ 00:21:20.988 "params": { 00:21:20.988 "name": "Nvme8", 00:21:20.988 "trtype": "tcp", 00:21:20.988 "traddr": "10.0.0.2", 00:21:20.988 "adrfam": "ipv4", 00:21:20.988 "trsvcid": "4420", 00:21:20.988 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:20.988 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:20.988 "hdgst": false, 00:21:20.988 "ddgst": false 00:21:20.988 }, 00:21:20.988 "method": "bdev_nvme_attach_controller" 00:21:20.988 },{ 00:21:20.988 "params": { 00:21:20.988 "name": "Nvme9", 00:21:20.988 "trtype": "tcp", 00:21:20.988 "traddr": "10.0.0.2", 00:21:20.988 "adrfam": "ipv4", 00:21:20.988 "trsvcid": "4420", 00:21:20.988 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:20.988 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:20.988 "hdgst": false, 00:21:20.988 "ddgst": false 00:21:20.988 }, 00:21:20.988 "method": "bdev_nvme_attach_controller" 00:21:20.988 },{ 00:21:20.988 "params": { 00:21:20.988 "name": "Nvme10", 00:21:20.988 "trtype": "tcp", 00:21:20.988 "traddr": "10.0.0.2", 00:21:20.988 "adrfam": "ipv4", 00:21:20.988 "trsvcid": "4420", 00:21:20.988 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:20.988 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:20.988 "hdgst": false, 00:21:20.988 "ddgst": false 00:21:20.988 }, 00:21:20.988 "method": "bdev_nvme_attach_controller" 00:21:20.988 }' 00:21:20.988 [2024-12-09 17:31:47.473916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.988 [2024-12-09 17:31:47.513254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.365 Running I/O for 1 seconds... 00:21:23.558 2260.00 IOPS, 141.25 MiB/s 00:21:23.558 Latency(us) 00:21:23.558 [2024-12-09T16:31:50.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.558 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.558 Verification LBA range: start 0x0 length 0x400 00:21:23.558 Nvme1n1 : 1.13 286.44 17.90 0.00 0.00 219002.30 8238.81 208716.56 00:21:23.558 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.558 Verification LBA range: start 0x0 length 0x400 00:21:23.558 Nvme2n1 : 1.13 283.05 17.69 0.00 0.00 220984.86 17476.27 212711.13 00:21:23.558 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.558 Verification LBA range: start 0x0 length 0x400 00:21:23.558 Nvme3n1 : 1.12 284.78 17.80 0.00 0.00 216499.44 15042.07 211712.49 00:21:23.558 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.558 Verification LBA range: start 0x0 length 0x400 00:21:23.558 Nvme4n1 : 1.10 297.08 18.57 0.00 0.00 203534.75 5586.16 212711.13 00:21:23.558 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:23.558 Verification LBA range: start 0x0 length 0x400 00:21:23.558 Nvme5n1 : 1.07 238.55 14.91 0.00 0.00 250378.48 19598.38 226692.14 00:21:23.558 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.558 Verification LBA range: start 0x0 length 0x400 00:21:23.558 Nvme6n1 : 1.14 283.53 17.72 0.00 0.00 207885.03 3120.76 231685.36 00:21:23.558 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.559 Verification LBA range: start 0x0 length 0x400 00:21:23.559 Nvme7n1 : 1.14 280.26 17.52 0.00 0.00 207833.97 14355.50 226692.14 00:21:23.559 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.559 Verification LBA range: start 0x0 length 0x400 00:21:23.559 Nvme8n1 : 1.14 284.32 17.77 0.00 0.00 201204.26 5523.75 213709.78 00:21:23.559 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.559 Verification LBA range: start 0x0 length 0x400 00:21:23.559 Nvme9n1 : 1.15 281.81 17.61 0.00 0.00 200873.96 1224.90 237677.23 00:21:23.559 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.559 Verification LBA range: start 0x0 length 0x400 00:21:23.559 Nvme10n1 : 1.15 278.99 17.44 0.00 0.00 199836.82 15478.98 217704.35 00:21:23.559 [2024-12-09T16:31:50.099Z] =================================================================================================================== 00:21:23.559 [2024-12-09T16:31:50.099Z] Total : 2798.80 174.92 0.00 0.00 212001.74 1224.90 237677.23 00:21:23.817 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:23.817 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:23.817 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:21:23.817 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:23.817 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:23.817 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:23.817 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:23.817 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:23.818 rmmod nvme_tcp 00:21:23.818 rmmod nvme_fabrics 00:21:23.818 rmmod nvme_keyring 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1958167 ']' 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1958167 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1958167 ']' 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 1958167 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1958167 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1958167' 00:21:23.818 killing process with pid 1958167 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1958167 00:21:23.818 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1958167 00:21:24.385 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:24.385 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:24.385 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:24.385 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:24.385 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:24.385 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:24.385 17:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:24.385 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:24.385 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:24.385 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.385 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.385 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.287 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:26.287 00:21:26.287 real 0m15.224s 00:21:26.287 user 0m33.359s 00:21:26.287 sys 0m5.815s 00:21:26.287 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.287 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.287 ************************************ 00:21:26.287 END TEST nvmf_shutdown_tc1 00:21:26.287 ************************************ 00:21:26.287 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:26.287 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:26.287 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.287 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:26.287 ************************************ 00:21:26.287 
START TEST nvmf_shutdown_tc2 00:21:26.287 ************************************ 00:21:26.287 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:26.287 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:26.287 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:26.288 17:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:26.288 17:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.288 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:26.546 17:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:26.546 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:26.546 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:26.546 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:26.546 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:26.546 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:26.546 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:26.546 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:26.546 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:26.546 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:26.547 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:26.547 17:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.547 17:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:26.547 Found net devices under 0000:af:00.0: cvl_0_0 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:26.547 Found net devices under 0000:af:00.1: cvl_0_1 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:26.547 17:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:26.547 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:26.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:26.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:21:26.547 00:21:26.547 --- 10.0.0.2 ping statistics --- 00:21:26.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.547 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:26.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:21:26.547 00:21:26.547 --- 10.0.0.1 ping statistics --- 00:21:26.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.547 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:26.547 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:26.806 17:31:53 
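By this point in the trace, `nvmf_tcp_init` (nvmf/common.sh) has split the two `cvl` interfaces across a private network namespace and verified reachability in both directions with `ping`. The sequence can be summarized as the following dry-run sketch — interface names, addresses, and the iptables rule are taken verbatim from the log above, but `run()` only echoes each command, since actually executing them requires root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence traced above.
# run() prints instead of executing: netns/iptables changes need root.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace, serves 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the default namespace as 10.0.0.1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

The split lets the target app (run under `ip netns exec cvl_0_0_ns_spdk`) and the initiator share one physical NIC pair while still traversing a real TCP path.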
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:26.806 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:26.806 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:26.806 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.806 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1959911 00:21:26.806 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1959911 00:21:26.806 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:26.806 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1959911 ']' 00:21:26.806 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.806 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.806 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:26.806 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.806 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.806 [2024-12-09 17:31:53.157246] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:21:26.806 [2024-12-09 17:31:53.157288] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.806 [2024-12-09 17:31:53.235211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.806 [2024-12-09 17:31:53.276511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.806 [2024-12-09 17:31:53.276542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.806 [2024-12-09 17:31:53.276551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.806 [2024-12-09 17:31:53.276558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.806 [2024-12-09 17:31:53.276564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:26.806 [2024-12-09 17:31:53.277800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.806 [2024-12-09 17:31:53.277933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.806 [2024-12-09 17:31:53.278041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.806 [2024-12-09 17:31:53.278042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:27.743 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.743 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:27.743 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:27.743 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:27.743 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 [2024-12-09 17:31:54.040116] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.743 17:31:54 
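The startup traced above launches `nvmf_tgt` inside the namespace and then blocks in `waitforlisten 1959911` until the RPC socket `/var/tmp/spdk.sock` appears. A minimal sketch of that polling idiom follows — this is a hypothetical simplification, not the real helper from autotest_common.sh, which carries additional retry and diagnostic logic:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idiom: poll until the app's UNIX-domain RPC
# socket exists, bailing out early if the process has already died.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -S "$rpc_addr" ] && return 0          # socket is up: ready
        kill -0 "$pid" 2>/dev/null || return 1  # process died while waiting
        sleep 0.1
        i=$((i + 1))
    done
    return 1                                    # timed out
}

# Demo: polling for a socket that never appears times out and fails.
waitforlisten_sketch $$ /var/tmp/nonexistent.sock 3 || echo "socket never appeared"
```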
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.743 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 Malloc1 00:21:27.743 [2024-12-09 17:31:54.154877] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.743 Malloc2 00:21:27.743 Malloc3 00:21:27.743 Malloc4 00:21:28.003 Malloc5 00:21:28.003 Malloc6 00:21:28.003 Malloc7 00:21:28.003 Malloc8 00:21:28.003 Malloc9 
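The `create_subsystems` loop above iterates `i` over `{1..10}`, `cat`-ing one batch of RPCs per subsystem into rpcs.txt, which is then replayed in a single `rpc_cmd` call — producing the Malloc1..Malloc10 bdevs and the listener on 10.0.0.2:4420 seen in the output. A runnable sketch of that accumulate-then-replay pattern follows; the specific RPC lines inside the here-doc are illustrative guesses, since the actual batch is defined in test/nvmf/target/shutdown.sh and is not shown in this log:

```shell
#!/usr/bin/env bash
# Sketch: accumulate one batch of RPCs per subsystem into a file, which the
# test then replays in a single rpc.py invocation.
rpcs=$(mktemp)
num_subsystems=({1..10})

for i in "${num_subsystems[@]}"; do
  # Hypothetical per-subsystem batch (the real one lives in shutdown.sh):
  # a malloc bdev, a subsystem, its namespace, and a TCP listener.
  cat >>"$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

echo "generated $(grep -c nvmf_create_subsystem "$rpcs") subsystem batches"
```

Batching the RPCs this way means the target processes all ten subsystems in one pass instead of paying per-call round-trip overhead.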
00:21:28.003 Malloc10 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1960186 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1960186 /var/tmp/bdevperf.sock 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1960186 ']' 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:28.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:28.263 { 00:21:28.263 "params": { 00:21:28.263 "name": "Nvme$subsystem", 00:21:28.263 "trtype": "$TEST_TRANSPORT", 00:21:28.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.263 "adrfam": "ipv4", 00:21:28.263 "trsvcid": "$NVMF_PORT", 00:21:28.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.263 "hdgst": ${hdgst:-false}, 00:21:28.263 "ddgst": ${ddgst:-false} 00:21:28.263 }, 00:21:28.263 "method": "bdev_nvme_attach_controller" 00:21:28.263 } 00:21:28.263 EOF 00:21:28.263 )") 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:28.263 { 00:21:28.263 "params": { 00:21:28.263 "name": "Nvme$subsystem", 00:21:28.263 "trtype": "$TEST_TRANSPORT", 00:21:28.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.263 
"adrfam": "ipv4", 00:21:28.263 "trsvcid": "$NVMF_PORT", 00:21:28.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.263 "hdgst": ${hdgst:-false}, 00:21:28.263 "ddgst": ${ddgst:-false} 00:21:28.263 }, 00:21:28.263 "method": "bdev_nvme_attach_controller" 00:21:28.263 } 00:21:28.263 EOF 00:21:28.263 )") 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:28.263 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:28.263 { 00:21:28.263 "params": { 00:21:28.263 "name": "Nvme$subsystem", 00:21:28.263 "trtype": "$TEST_TRANSPORT", 00:21:28.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.263 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "$NVMF_PORT", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.264 "hdgst": ${hdgst:-false}, 00:21:28.264 "ddgst": ${ddgst:-false} 00:21:28.264 }, 00:21:28.264 "method": "bdev_nvme_attach_controller" 00:21:28.264 } 00:21:28.264 EOF 00:21:28.264 )") 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:28.264 { 00:21:28.264 "params": { 00:21:28.264 "name": "Nvme$subsystem", 00:21:28.264 "trtype": "$TEST_TRANSPORT", 00:21:28.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.264 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "$NVMF_PORT", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.264 "hdgst": ${hdgst:-false}, 00:21:28.264 "ddgst": ${ddgst:-false} 00:21:28.264 }, 00:21:28.264 "method": "bdev_nvme_attach_controller" 00:21:28.264 } 00:21:28.264 EOF 00:21:28.264 )") 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:28.264 { 00:21:28.264 "params": { 00:21:28.264 "name": "Nvme$subsystem", 00:21:28.264 "trtype": "$TEST_TRANSPORT", 00:21:28.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.264 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "$NVMF_PORT", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.264 "hdgst": ${hdgst:-false}, 00:21:28.264 "ddgst": ${ddgst:-false} 00:21:28.264 }, 00:21:28.264 "method": "bdev_nvme_attach_controller" 00:21:28.264 } 00:21:28.264 EOF 00:21:28.264 )") 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:28.264 { 00:21:28.264 "params": { 00:21:28.264 "name": "Nvme$subsystem", 00:21:28.264 "trtype": "$TEST_TRANSPORT", 00:21:28.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.264 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "$NVMF_PORT", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.264 "hdgst": ${hdgst:-false}, 00:21:28.264 "ddgst": 
${ddgst:-false} 00:21:28.264 }, 00:21:28.264 "method": "bdev_nvme_attach_controller" 00:21:28.264 } 00:21:28.264 EOF 00:21:28.264 )") 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:28.264 { 00:21:28.264 "params": { 00:21:28.264 "name": "Nvme$subsystem", 00:21:28.264 "trtype": "$TEST_TRANSPORT", 00:21:28.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.264 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "$NVMF_PORT", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.264 "hdgst": ${hdgst:-false}, 00:21:28.264 "ddgst": ${ddgst:-false} 00:21:28.264 }, 00:21:28.264 "method": "bdev_nvme_attach_controller" 00:21:28.264 } 00:21:28.264 EOF 00:21:28.264 )") 00:21:28.264 [2024-12-09 17:31:54.627483] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:21:28.264 [2024-12-09 17:31:54.627531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1960186 ] 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:28.264 { 00:21:28.264 "params": { 00:21:28.264 "name": "Nvme$subsystem", 00:21:28.264 "trtype": "$TEST_TRANSPORT", 00:21:28.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.264 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "$NVMF_PORT", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.264 "hdgst": ${hdgst:-false}, 00:21:28.264 "ddgst": ${ddgst:-false} 00:21:28.264 }, 00:21:28.264 "method": "bdev_nvme_attach_controller" 00:21:28.264 } 00:21:28.264 EOF 00:21:28.264 )") 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:28.264 { 00:21:28.264 "params": { 00:21:28.264 "name": "Nvme$subsystem", 00:21:28.264 "trtype": "$TEST_TRANSPORT", 00:21:28.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.264 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "$NVMF_PORT", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.264 "hdgst": 
${hdgst:-false}, 00:21:28.264 "ddgst": ${ddgst:-false} 00:21:28.264 }, 00:21:28.264 "method": "bdev_nvme_attach_controller" 00:21:28.264 } 00:21:28.264 EOF 00:21:28.264 )") 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:28.264 { 00:21:28.264 "params": { 00:21:28.264 "name": "Nvme$subsystem", 00:21:28.264 "trtype": "$TEST_TRANSPORT", 00:21:28.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.264 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "$NVMF_PORT", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.264 "hdgst": ${hdgst:-false}, 00:21:28.264 "ddgst": ${ddgst:-false} 00:21:28.264 }, 00:21:28.264 "method": "bdev_nvme_attach_controller" 00:21:28.264 } 00:21:28.264 EOF 00:21:28.264 )") 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
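`gen_nvmf_target_json`, traced above, captures one here-doc JSON fragment per subsystem into the `config` array, then joins the fragments with `IFS=,` and pipes the result through `jq` to form the `--json` config that bdevperf reads from `/dev/fd/63`. A self-contained sketch of that pattern, reduced to three controllers, with the tcp/10.0.0.2/4420 values substituted as in the final `printf`:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern: one JSON fragment per
# subsystem, captured via command substitution, then comma-joined.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# "$*" with IFS=, inserts a comma between fragments, exactly as the
# IFS=, / printf '%s\n' step in the log does.
IFS=, joined="${config[*]}"
printf '%s\n' "$joined"
```

Unset `hdgst`/`ddgst` fall back to `false` through `${hdgst:-false}`, which is why every controller in the emitted config above shows `"hdgst": false, "ddgst": false`.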
00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:28.264 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:28.264 "params": { 00:21:28.264 "name": "Nvme1", 00:21:28.264 "trtype": "tcp", 00:21:28.264 "traddr": "10.0.0.2", 00:21:28.264 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "4420", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.264 "hdgst": false, 00:21:28.264 "ddgst": false 00:21:28.264 }, 00:21:28.264 "method": "bdev_nvme_attach_controller" 00:21:28.264 },{ 00:21:28.264 "params": { 00:21:28.264 "name": "Nvme2", 00:21:28.264 "trtype": "tcp", 00:21:28.264 "traddr": "10.0.0.2", 00:21:28.264 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "4420", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:28.264 "hdgst": false, 00:21:28.264 "ddgst": false 00:21:28.264 }, 00:21:28.264 "method": "bdev_nvme_attach_controller" 00:21:28.264 },{ 00:21:28.264 "params": { 00:21:28.264 "name": "Nvme3", 00:21:28.264 "trtype": "tcp", 00:21:28.264 "traddr": "10.0.0.2", 00:21:28.264 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "4420", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:28.264 "hdgst": false, 00:21:28.264 "ddgst": false 00:21:28.264 }, 00:21:28.264 "method": "bdev_nvme_attach_controller" 00:21:28.264 },{ 00:21:28.264 "params": { 00:21:28.264 "name": "Nvme4", 00:21:28.264 "trtype": "tcp", 00:21:28.264 "traddr": "10.0.0.2", 00:21:28.264 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "4420", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:28.264 "hdgst": false, 00:21:28.264 "ddgst": false 00:21:28.264 }, 00:21:28.264 "method": "bdev_nvme_attach_controller" 00:21:28.264 },{ 00:21:28.264 "params": { 
00:21:28.264 "name": "Nvme5", 00:21:28.264 "trtype": "tcp", 00:21:28.264 "traddr": "10.0.0.2", 00:21:28.264 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "4420", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:28.264 "hdgst": false, 00:21:28.264 "ddgst": false 00:21:28.264 }, 00:21:28.264 "method": "bdev_nvme_attach_controller" 00:21:28.264 },{ 00:21:28.264 "params": { 00:21:28.264 "name": "Nvme6", 00:21:28.264 "trtype": "tcp", 00:21:28.264 "traddr": "10.0.0.2", 00:21:28.264 "adrfam": "ipv4", 00:21:28.264 "trsvcid": "4420", 00:21:28.264 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:28.264 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:28.264 "hdgst": false, 00:21:28.265 "ddgst": false 00:21:28.265 }, 00:21:28.265 "method": "bdev_nvme_attach_controller" 00:21:28.265 },{ 00:21:28.265 "params": { 00:21:28.265 "name": "Nvme7", 00:21:28.265 "trtype": "tcp", 00:21:28.265 "traddr": "10.0.0.2", 00:21:28.265 "adrfam": "ipv4", 00:21:28.265 "trsvcid": "4420", 00:21:28.265 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:28.265 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:28.265 "hdgst": false, 00:21:28.265 "ddgst": false 00:21:28.265 }, 00:21:28.265 "method": "bdev_nvme_attach_controller" 00:21:28.265 },{ 00:21:28.265 "params": { 00:21:28.265 "name": "Nvme8", 00:21:28.265 "trtype": "tcp", 00:21:28.265 "traddr": "10.0.0.2", 00:21:28.265 "adrfam": "ipv4", 00:21:28.265 "trsvcid": "4420", 00:21:28.265 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:28.265 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:28.265 "hdgst": false, 00:21:28.265 "ddgst": false 00:21:28.265 }, 00:21:28.265 "method": "bdev_nvme_attach_controller" 00:21:28.265 },{ 00:21:28.265 "params": { 00:21:28.265 "name": "Nvme9", 00:21:28.265 "trtype": "tcp", 00:21:28.265 "traddr": "10.0.0.2", 00:21:28.265 "adrfam": "ipv4", 00:21:28.265 "trsvcid": "4420", 00:21:28.265 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:28.265 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:28.265 "hdgst": false, 00:21:28.265 "ddgst": false 00:21:28.265 }, 00:21:28.265 "method": "bdev_nvme_attach_controller" 00:21:28.265 },{ 00:21:28.265 "params": { 00:21:28.265 "name": "Nvme10", 00:21:28.265 "trtype": "tcp", 00:21:28.265 "traddr": "10.0.0.2", 00:21:28.265 "adrfam": "ipv4", 00:21:28.265 "trsvcid": "4420", 00:21:28.265 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:28.265 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:28.265 "hdgst": false, 00:21:28.265 "ddgst": false 00:21:28.265 }, 00:21:28.265 "method": "bdev_nvme_attach_controller" 00:21:28.265 }' 00:21:28.265 [2024-12-09 17:31:54.705391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.265 [2024-12-09 17:31:54.745160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.170 Running I/O for 10 seconds... 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:30.170 17:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:30.170 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:30.430 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:30.430 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:30.430 17:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:30.430 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:30.430 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.430 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.689 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.689 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:30.689 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:30.689 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1960186 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1960186 ']' 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1960186 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1960186 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1960186' 00:21:30.948 killing process with pid 1960186 00:21:30.948 17:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1960186 00:21:30.948 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1960186 00:21:30.948 Received shutdown signal, test time was about 0.956431 seconds 00:21:30.948 00:21:30.948 Latency(us) 00:21:30.948 [2024-12-09T16:31:57.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.948 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.948 Verification LBA range: start 0x0 length 0x400 00:21:30.948 Nvme1n1 : 0.94 273.18 17.07 0.00 0.00 231731.93 17476.27 211712.49 00:21:30.948 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.948 Verification LBA range: start 0x0 length 0x400 00:21:30.948 Nvme2n1 : 0.93 285.94 17.87 0.00 0.00 214476.00 6647.22 192738.26 00:21:30.948 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.948 Verification LBA range: start 0x0 length 0x400 00:21:30.948 Nvme3n1 : 0.95 335.78 20.99 0.00 0.00 181665.79 16227.96 215707.06 00:21:30.948 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.948 Verification LBA range: start 0x0 length 0x400 00:21:30.948 Nvme4n1 : 0.92 277.53 17.35 0.00 0.00 216093.99 14792.41 222697.57 00:21:30.948 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.948 Verification LBA range: start 0x0 length 0x400 00:21:30.948 Nvme5n1 : 0.94 271.05 16.94 0.00 0.00 218009.36 23093.64 214708.42 00:21:30.948 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.948 Verification LBA range: start 0x0 length 0x400 00:21:30.948 Nvme6n1 : 0.94 276.23 17.26 0.00 0.00 209732.12 2871.10 206719.27 00:21:30.948 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.948 Verification LBA range: start 0x0 length 0x400 00:21:30.948 Nvme7n1 : 
0.93 275.11 17.19 0.00 0.00 206710.49 16976.94 209715.20 00:21:30.948 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.948 Verification LBA range: start 0x0 length 0x400 00:21:30.948 Nvme8n1 : 0.95 270.35 16.90 0.00 0.00 207037.68 14917.24 215707.06 00:21:30.948 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.948 Verification LBA range: start 0x0 length 0x400 00:21:30.948 Nvme9n1 : 0.95 269.47 16.84 0.00 0.00 203389.56 17476.27 220700.28 00:21:30.948 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.948 Verification LBA range: start 0x0 length 0x400 00:21:30.948 Nvme10n1 : 0.96 267.84 16.74 0.00 0.00 201619.02 16103.13 235679.94 00:21:30.948 [2024-12-09T16:31:57.488Z] =================================================================================================================== 00:21:30.948 [2024-12-09T16:31:57.488Z] Total : 2802.48 175.16 0.00 0.00 208406.24 2871.10 235679.94 00:21:31.207 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1959911 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:32.144 rmmod nvme_tcp 00:21:32.144 rmmod nvme_fabrics 00:21:32.144 rmmod nvme_keyring 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1959911 ']' 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1959911 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1959911 ']' 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1959911 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:21:32.144 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1959911 00:21:32.403 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:32.403 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:32.403 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1959911' 00:21:32.403 killing process with pid 1959911 00:21:32.403 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1959911 00:21:32.403 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1959911 00:21:32.663 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:32.663 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:32.663 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:32.663 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:32.663 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:32.663 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:32.663 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:32.663 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.663 17:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:32.663 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.663 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.663 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:35.200 00:21:35.200 real 0m8.331s 00:21:35.200 user 0m26.130s 00:21:35.200 sys 0m1.387s 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.200 ************************************ 00:21:35.200 END TEST nvmf_shutdown_tc2 00:21:35.200 ************************************ 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:35.200 ************************************ 00:21:35.200 START TEST nvmf_shutdown_tc3 00:21:35.200 ************************************ 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:35.200 17:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.200 17:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.200 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:35.201 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:35.201 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:35.201 Found net devices under 0000:af:00.0: cvl_0_0 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.201 17:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:35.201 Found net devices under 0000:af:00.1: cvl_0_1 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:35.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:21:35.201 00:21:35.201 --- 10.0.0.2 ping statistics --- 00:21:35.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.201 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:21:35.201 00:21:35.201 --- 10.0.0.1 ping statistics --- 00:21:35.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.201 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:35.201 
17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1961431 00:21:35.201 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1961431 00:21:35.202 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:35.202 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1961431 ']' 00:21:35.202 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.202 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.202 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.202 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.202 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:35.202 [2024-12-09 17:32:01.634709] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:21:35.202 [2024-12-09 17:32:01.634757] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.202 [2024-12-09 17:32:01.713882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.461 [2024-12-09 17:32:01.755183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.461 [2024-12-09 17:32:01.755216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.461 [2024-12-09 17:32:01.755223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.461 [2024-12-09 17:32:01.755229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.461 [2024-12-09 17:32:01.755234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:35.461 [2024-12-09 17:32:01.756736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.461 [2024-12-09 17:32:01.756846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.461 [2024-12-09 17:32:01.756953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.461 [2024-12-09 17:32:01.756954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.030 [2024-12-09 17:32:02.504293] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.030 17:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.030 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.292 Malloc1 00:21:36.292 [2024-12-09 17:32:02.617052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.292 Malloc2 00:21:36.292 Malloc3 00:21:36.292 Malloc4 00:21:36.292 Malloc5 00:21:36.292 Malloc6 00:21:36.553 Malloc7 00:21:36.553 Malloc8 00:21:36.553 Malloc9 
00:21:36.553 Malloc10 00:21:36.553 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1961701 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1961701 /var/tmp/bdevperf.sock 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1961701 ']' 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:36.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.553 { 00:21:36.553 "params": { 00:21:36.553 "name": "Nvme$subsystem", 00:21:36.553 "trtype": "$TEST_TRANSPORT", 00:21:36.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.553 "adrfam": "ipv4", 00:21:36.553 "trsvcid": "$NVMF_PORT", 00:21:36.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.553 "hdgst": ${hdgst:-false}, 00:21:36.553 "ddgst": ${ddgst:-false} 00:21:36.553 }, 00:21:36.553 "method": "bdev_nvme_attach_controller" 00:21:36.553 } 00:21:36.553 EOF 00:21:36.553 )") 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.553 { 00:21:36.553 "params": { 00:21:36.553 "name": "Nvme$subsystem", 00:21:36.553 "trtype": "$TEST_TRANSPORT", 00:21:36.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.553 
"adrfam": "ipv4", 00:21:36.553 "trsvcid": "$NVMF_PORT", 00:21:36.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.553 "hdgst": ${hdgst:-false}, 00:21:36.553 "ddgst": ${ddgst:-false} 00:21:36.553 }, 00:21:36.553 "method": "bdev_nvme_attach_controller" 00:21:36.553 } 00:21:36.553 EOF 00:21:36.553 )") 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.553 { 00:21:36.553 "params": { 00:21:36.553 "name": "Nvme$subsystem", 00:21:36.553 "trtype": "$TEST_TRANSPORT", 00:21:36.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.553 "adrfam": "ipv4", 00:21:36.553 "trsvcid": "$NVMF_PORT", 00:21:36.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.553 "hdgst": ${hdgst:-false}, 00:21:36.553 "ddgst": ${ddgst:-false} 00:21:36.553 }, 00:21:36.553 "method": "bdev_nvme_attach_controller" 00:21:36.553 } 00:21:36.553 EOF 00:21:36.553 )") 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.553 { 00:21:36.553 "params": { 00:21:36.553 "name": "Nvme$subsystem", 00:21:36.553 "trtype": "$TEST_TRANSPORT", 00:21:36.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.553 "adrfam": "ipv4", 00:21:36.553 "trsvcid": "$NVMF_PORT", 00:21:36.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:36.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.553 "hdgst": ${hdgst:-false}, 00:21:36.553 "ddgst": ${ddgst:-false} 00:21:36.553 }, 00:21:36.553 "method": "bdev_nvme_attach_controller" 00:21:36.553 } 00:21:36.553 EOF 00:21:36.553 )") 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.553 { 00:21:36.553 "params": { 00:21:36.553 "name": "Nvme$subsystem", 00:21:36.553 "trtype": "$TEST_TRANSPORT", 00:21:36.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.553 "adrfam": "ipv4", 00:21:36.553 "trsvcid": "$NVMF_PORT", 00:21:36.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.553 "hdgst": ${hdgst:-false}, 00:21:36.553 "ddgst": ${ddgst:-false} 00:21:36.553 }, 00:21:36.553 "method": "bdev_nvme_attach_controller" 00:21:36.553 } 00:21:36.553 EOF 00:21:36.553 )") 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.553 { 00:21:36.553 "params": { 00:21:36.553 "name": "Nvme$subsystem", 00:21:36.553 "trtype": "$TEST_TRANSPORT", 00:21:36.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.553 "adrfam": "ipv4", 00:21:36.553 "trsvcid": "$NVMF_PORT", 00:21:36.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.553 "hdgst": ${hdgst:-false}, 00:21:36.553 "ddgst": 
${ddgst:-false} 00:21:36.553 }, 00:21:36.553 "method": "bdev_nvme_attach_controller" 00:21:36.553 } 00:21:36.553 EOF 00:21:36.553 )") 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.553 { 00:21:36.553 "params": { 00:21:36.553 "name": "Nvme$subsystem", 00:21:36.553 "trtype": "$TEST_TRANSPORT", 00:21:36.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.553 "adrfam": "ipv4", 00:21:36.553 "trsvcid": "$NVMF_PORT", 00:21:36.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.553 "hdgst": ${hdgst:-false}, 00:21:36.553 "ddgst": ${ddgst:-false} 00:21:36.553 }, 00:21:36.553 "method": "bdev_nvme_attach_controller" 00:21:36.553 } 00:21:36.553 EOF 00:21:36.553 )") 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:36.553 [2024-12-09 17:32:03.086011] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:21:36.553 [2024-12-09 17:32:03.086058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1961701 ] 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.553 { 00:21:36.553 "params": { 00:21:36.553 "name": "Nvme$subsystem", 00:21:36.553 "trtype": "$TEST_TRANSPORT", 00:21:36.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.553 "adrfam": "ipv4", 00:21:36.553 "trsvcid": "$NVMF_PORT", 00:21:36.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.553 "hdgst": ${hdgst:-false}, 00:21:36.553 "ddgst": ${ddgst:-false} 00:21:36.553 }, 00:21:36.553 "method": "bdev_nvme_attach_controller" 00:21:36.553 } 00:21:36.553 EOF 00:21:36.553 )") 00:21:36.553 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:36.813 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.813 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.813 { 00:21:36.813 "params": { 00:21:36.813 "name": "Nvme$subsystem", 00:21:36.813 "trtype": "$TEST_TRANSPORT", 00:21:36.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.813 "adrfam": "ipv4", 00:21:36.813 "trsvcid": "$NVMF_PORT", 00:21:36.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.813 "hdgst": ${hdgst:-false}, 00:21:36.813 "ddgst": ${ddgst:-false} 00:21:36.813 }, 00:21:36.813 "method": 
"bdev_nvme_attach_controller" 00:21:36.813 } 00:21:36.813 EOF 00:21:36.813 )") 00:21:36.813 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:36.813 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.813 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.813 { 00:21:36.813 "params": { 00:21:36.813 "name": "Nvme$subsystem", 00:21:36.813 "trtype": "$TEST_TRANSPORT", 00:21:36.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.813 "adrfam": "ipv4", 00:21:36.813 "trsvcid": "$NVMF_PORT", 00:21:36.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.813 "hdgst": ${hdgst:-false}, 00:21:36.813 "ddgst": ${ddgst:-false} 00:21:36.813 }, 00:21:36.813 "method": "bdev_nvme_attach_controller" 00:21:36.813 } 00:21:36.813 EOF 00:21:36.813 )") 00:21:36.813 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:36.813 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:21:36.813 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:36.813 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:36.813 "params": { 00:21:36.813 "name": "Nvme1", 00:21:36.813 "trtype": "tcp", 00:21:36.813 "traddr": "10.0.0.2", 00:21:36.813 "adrfam": "ipv4", 00:21:36.813 "trsvcid": "4420", 00:21:36.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.813 "hdgst": false, 00:21:36.813 "ddgst": false 00:21:36.813 }, 00:21:36.813 "method": "bdev_nvme_attach_controller" 00:21:36.813 },{ 00:21:36.813 "params": { 00:21:36.813 "name": "Nvme2", 00:21:36.813 "trtype": "tcp", 00:21:36.813 "traddr": "10.0.0.2", 00:21:36.813 "adrfam": "ipv4", 00:21:36.813 "trsvcid": "4420", 00:21:36.813 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:36.813 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:36.813 "hdgst": false, 00:21:36.813 "ddgst": false 00:21:36.813 }, 00:21:36.813 "method": "bdev_nvme_attach_controller" 00:21:36.813 },{ 00:21:36.813 "params": { 00:21:36.813 "name": "Nvme3", 00:21:36.813 "trtype": "tcp", 00:21:36.813 "traddr": "10.0.0.2", 00:21:36.813 "adrfam": "ipv4", 00:21:36.813 "trsvcid": "4420", 00:21:36.813 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:36.813 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:36.813 "hdgst": false, 00:21:36.813 "ddgst": false 00:21:36.813 }, 00:21:36.813 "method": "bdev_nvme_attach_controller" 00:21:36.813 },{ 00:21:36.813 "params": { 00:21:36.813 "name": "Nvme4", 00:21:36.813 "trtype": "tcp", 00:21:36.813 "traddr": "10.0.0.2", 00:21:36.813 "adrfam": "ipv4", 00:21:36.813 "trsvcid": "4420", 00:21:36.813 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:36.813 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:36.813 "hdgst": false, 00:21:36.813 "ddgst": false 00:21:36.813 }, 00:21:36.813 "method": "bdev_nvme_attach_controller" 00:21:36.813 },{ 00:21:36.813 "params": { 
00:21:36.813 "name": "Nvme5", 00:21:36.813 "trtype": "tcp", 00:21:36.813 "traddr": "10.0.0.2", 00:21:36.813 "adrfam": "ipv4", 00:21:36.813 "trsvcid": "4420", 00:21:36.813 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:36.813 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:36.813 "hdgst": false, 00:21:36.813 "ddgst": false 00:21:36.813 }, 00:21:36.813 "method": "bdev_nvme_attach_controller" 00:21:36.813 },{ 00:21:36.813 "params": { 00:21:36.813 "name": "Nvme6", 00:21:36.813 "trtype": "tcp", 00:21:36.813 "traddr": "10.0.0.2", 00:21:36.813 "adrfam": "ipv4", 00:21:36.813 "trsvcid": "4420", 00:21:36.813 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:36.813 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:36.813 "hdgst": false, 00:21:36.814 "ddgst": false 00:21:36.814 }, 00:21:36.814 "method": "bdev_nvme_attach_controller" 00:21:36.814 },{ 00:21:36.814 "params": { 00:21:36.814 "name": "Nvme7", 00:21:36.814 "trtype": "tcp", 00:21:36.814 "traddr": "10.0.0.2", 00:21:36.814 "adrfam": "ipv4", 00:21:36.814 "trsvcid": "4420", 00:21:36.814 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:36.814 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:36.814 "hdgst": false, 00:21:36.814 "ddgst": false 00:21:36.814 }, 00:21:36.814 "method": "bdev_nvme_attach_controller" 00:21:36.814 },{ 00:21:36.814 "params": { 00:21:36.814 "name": "Nvme8", 00:21:36.814 "trtype": "tcp", 00:21:36.814 "traddr": "10.0.0.2", 00:21:36.814 "adrfam": "ipv4", 00:21:36.814 "trsvcid": "4420", 00:21:36.814 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:36.814 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:36.814 "hdgst": false, 00:21:36.814 "ddgst": false 00:21:36.814 }, 00:21:36.814 "method": "bdev_nvme_attach_controller" 00:21:36.814 },{ 00:21:36.814 "params": { 00:21:36.814 "name": "Nvme9", 00:21:36.814 "trtype": "tcp", 00:21:36.814 "traddr": "10.0.0.2", 00:21:36.814 "adrfam": "ipv4", 00:21:36.814 "trsvcid": "4420", 00:21:36.814 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:36.814 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:36.814 "hdgst": false, 00:21:36.814 "ddgst": false 00:21:36.814 }, 00:21:36.814 "method": "bdev_nvme_attach_controller" 00:21:36.814 },{ 00:21:36.814 "params": { 00:21:36.814 "name": "Nvme10", 00:21:36.814 "trtype": "tcp", 00:21:36.814 "traddr": "10.0.0.2", 00:21:36.814 "adrfam": "ipv4", 00:21:36.814 "trsvcid": "4420", 00:21:36.814 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:36.814 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:36.814 "hdgst": false, 00:21:36.814 "ddgst": false 00:21:36.814 }, 00:21:36.814 "method": "bdev_nvme_attach_controller" 00:21:36.814 }' 00:21:36.814 [2024-12-09 17:32:03.162260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.814 [2024-12-09 17:32:03.201547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.718 Running I/O for 10 seconds... 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:38.718 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:38.978 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:21:38.978 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:38.978 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:38.978 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:38.978 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.978 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:38.978 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.978 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:38.978 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:38.978 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:39.237 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:39.237 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:39.237 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq 
-r '.bdevs[0].num_read_ops' 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1961431 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1961431 ']' 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1961431 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.238 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1961431 00:21:39.527 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:39.527 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:39.527 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- 
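The `waitforio` trace above polls `bdev_get_iostat` for Nvme1n1, extracting `num_read_ops` with jq, and succeeds once the count reaches 100 (here: 3, then 67, then 131) within 10 attempts. A simplified, runnable re-implementation of that control flow, with the RPC replaced by a stub counter so it works without an SPDK socket (the stub and its growth rate of 67 per call are invented for illustration):

```shell
# Stand-in for "rpc_cmd -s $sock bdev_get_iostat -b $bdev | jq -r '.bdevs[0].num_read_ops'".
# Each call bumps a global counter, loosely imitating the log's 3 -> 67 -> 131 progression.
count=0
fake_iostat() {
    count=$((count + 67))
}

# Poll until the read-op count crosses the threshold or the retry budget runs out,
# mirroring the shutdown.sh loop: i starts at 10, sleep 0.25 between attempts.
waitforio() {
    local threshold=100 i=10 ret=1 ops
    while (( i != 0 )); do
        fake_iostat
        ops=$count
        if [ "$ops" -ge "$threshold" ]; then
            ret=0
            break
        fi
        sleep 0.25
        (( i-- ))
    done
    return $ret
}

waitforio && echo "I/O threshold reached after reads=$count"
```

With the stub, the first poll sees 67 (below the threshold) and the second sees 134, so the loop breaks with success on the second attempt, just as the real trace breaks once 131 >= 100.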
# echo 'killing process with pid 1961431' 00:21:39.527 killing process with pid 1961431 00:21:39.527 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1961431 00:21:39.527 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1961431 00:21:39.527 [2024-12-09 17:32:05.797220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.527 [2024-12-09 17:32:05.797276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.527 [2024-12-09 17:32:05.797284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.527 [2024-12-09 17:32:05.797291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.527 [2024-12-09 17:32:05.797298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.527 [2024-12-09 17:32:05.797304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.527 [2024-12-09 17:32:05.797310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.527 [2024-12-09 17:32:05.797316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.527 [2024-12-09 17:32:05.797323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.527 [2024-12-09 17:32:05.797329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.527 
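The `killprocess 1961431` trace above shows the safety checks the helper runs before signalling: a non-empty pid, `kill -0` aliveness, and a `ps --no-headers -o comm=` name check so it never kills a bare `sudo` wrapper. A minimal sketch of that pattern, demonstrated on a throwaway background `sleep` (the function body is a simplified reconstruction, not the autotest_common.sh source):

```shell
# Start a disposable process to kill.
sleep 30 &
pid=$!

# Simplified killprocess: validate the pid, check it is alive and not "sudo",
# then signal it and reap it so no zombie is left behind.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 1     # still running?
    local name
    name=$(ps --no-headers -o comm= -p "$pid")
    [ "$name" = "sudo" ] && return 1           # refuse to kill a sudo wrapper
    kill "$pid"
    wait "$pid" 2>/dev/null
    echo "killed process with pid $pid"
}

killprocess "$pid"
```

In the log the target is `reactor_1` (an SPDK reactor thread name), which passes the `= sudo` comparison, so the kill proceeds and the script then waits on the pid.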
[2024-12-09 17:32:05.797335 .. 17:32:05.797641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set (identical message repeated at each intervening timestamp)
00:21:39.528 [2024-12-09 17:32:05.797648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.528 [2024-12-09 17:32:05.797653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.528 [2024-12-09 17:32:05.797659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.528 [2024-12-09 17:32:05.797665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b30c0 is same with the state(6) to be set 00:21:39.528 [2024-12-09 17:32:05.799569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.528 [2024-12-09 17:32:05.799605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.528 [2024-12-09 17:32:05.799613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.528 [2024-12-09 17:32:05.799620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.528 [2024-12-09 17:32:05.799630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.528 [2024-12-09 17:32:05.799637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.528 [2024-12-09 17:32:05.799643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.528 [2024-12-09 17:32:05.799649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.528 [2024-12-09 17:32:05.799655] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.528 [2024-12-09 17:32:05.799662 .. 17:32:05.799954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set (identical message repeated at each intervening timestamp) 00:21:39.529 [2024-12-09 17:32:05.799960]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.799967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.799973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.799980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.799987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.799993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b35b0 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.800445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.529 [2024-12-09 17:32:05.800476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.529 [2024-12-09 17:32:05.800487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.529 [2024-12-09 17:32:05.800494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.529 [2024-12-09 17:32:05.800502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.529 [2024-12-09 17:32:05.800509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.529 
[2024-12-09 17:32:05.800516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.529 [2024-12-09 17:32:05.800523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.529 [2024-12-09 17:32:05.800529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6d110 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.800597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.529 [2024-12-09 17:32:05.800606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.529 [2024-12-09 17:32:05.800614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.529 [2024-12-09 17:32:05.800621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.529 [2024-12-09 17:32:05.800629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.529 [2024-12-09 17:32:05.800636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.529 [2024-12-09 17:32:05.800643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.529 [2024-12-09 17:32:05.800650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.529 [2024-12-09 17:32:05.800656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1913410 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the 
state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 
17:32:05.801557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801638] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.529 [2024-12-09 17:32:05.801780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.530 [2024-12-09 17:32:05.801787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.530 [2024-12-09 17:32:05.801793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 
is same with the state(6) to be set 00:21:39.530 [2024-12-09 17:32:05.801799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.530 [2024-12-09 17:32:05.801805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3a80 is same with the state(6) to be set 00:21:39.530 [2024-12-09 17:32:05.803810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.803832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.803847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.803855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.803864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.803871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.803880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.803886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.803894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.803901] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.803909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.803917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.803925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.803931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.803939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.803947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.803955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.803962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.803970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.803977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.803989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.803995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 
[2024-12-09 17:32:05.804160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.530 [2024-12-09 17:32:05.804402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.530 [2024-12-09 17:32:05.804409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.531 [2024-12-09 17:32:05.804417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b3f70 is same with the state(6) to be set
00:21:39.531 [2024-12-09 17:32:05.804657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.531 [2024-12-09 17:32:05.804721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.531 [2024-12-09 17:32:05.804730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.531 [2024-12-09 17:32:05.804738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.531 [2024-12-09 17:32:05.804744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.531 [2024-12-09 17:32:05.804752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.531 [2024-12-09 17:32:05.804758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.531 [2024-12-09 17:32:05.804767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.531 [2024-12-09 17:32:05.804773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.531 [2024-12-09 17:32:05.804783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.531 [2024-12-09 17:32:05.804790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.531 [2024-12-09 17:32:05.804797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.531 [2024-12-09 17:32:05.804804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.531 [2024-12-09 17:32:05.804812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.532 [2024-12-09 17:32:05.804822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.532 [2024-12-09 17:32:05.804830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.532 [2024-12-09 17:32:05.804836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.532 [2024-12-09 17:32:05.804844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.532 [2024-12-09 17:32:05.804851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.532 [2024-12-09 17:32:05.804858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57e30 is same with the state(6) to be set
00:21:39.532 [2024-12-09 17:32:05.805263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set
(last message repeated until 17:32:05.805466)
00:21:39.532 [2024-12-09 17:32:05.805472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 
is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 
00:21:39.532 [2024-12-09 17:32:05.805626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.805676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4440 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:39.532 [2024-12-09 17:32:05.806790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806809] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6d110 (9): Bad file descriptor 00:21:39.532 [2024-12-09 17:32:05.806816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.532 [2024-12-09 17:32:05.806899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 
is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.806984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.806991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.806999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 
[2024-12-09 17:32:05.807012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 
[2024-12-09 17:32:05.807083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 
[2024-12-09 17:32:05.807136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to 
be set 00:21:39.533 [2024-12-09 17:32:05.807193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4910 is same with the state(6) to be set 00:21:39.533 [2024-12-09 17:32:05.807271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.533 [2024-12-09 17:32:05.807319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.533 [2024-12-09 17:32:05.807327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.534 [2024-12-09 17:32:05.807333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.534 [2024-12-09 17:32:05.807342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.534 [2024-12-09 17:32:05.807350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.534 [2024-12-09 17:32:05.807358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.534 [2024-12-09 17:32:05.807364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.534 [2024-12-09 17:32:05.807372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.534 [2024-12-09 17:32:05.807379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.534 [2024-12-09 17:32:05.807388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 
nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.534 [2024-12-09 17:32:05.807394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.534 [2024-12-09 17:32:05.807403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.534 [2024-12-09 17:32:05.807409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.534 [2024-12-09 17:32:05.807417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.534 [2024-12-09 17:32:05.807425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.534 [2024-12-09 17:32:05.807433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.534 [2024-12-09 17:32:05.807440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.534 [2024-12-09 17:32:05.807451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.534 [2024-12-09 17:32:05.807458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.534 [2024-12-09 17:32:05.807466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.534 [2024-12-09 17:32:05.807473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.534 [2024-12-09 17:32:05.807481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.534 [2024-12-09 17:32:05.807912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.534 [2024-12-09 17:32:05.807918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.535 [2024-12-09 17:32:05.807926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.535 [2024-12-09 17:32:05.807932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.535 [2024-12-09 17:32:05.807940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.535 [2024-12-09 17:32:05.807946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.535 [2024-12-09 17:32:05.807954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.535 [2024-12-09 17:32:05.807961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.535 [2024-12-09 17:32:05.807969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.535 [2024-12-09 17:32:05.807975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.535 [2024-12-09 17:32:05.807983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.535 [2024-12-09 17:32:05.807989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.535 [2024-12-09 17:32:05.807997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.535 [2024-12-09 17:32:05.808005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.535 [2024-12-09 17:32:05.808279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4de0 is same with the state(6) to be set
00:21:39.535 [2024-12-09 17:32:05.809540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b52d0 is same with the state(6) to be set
00:21:39.536 [2024-12-09 17:32:05.810672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:39.536 [2024-12-09 17:32:05.810728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fec0 (9): Bad file descriptor
00:21:39.536 [2024-12-09 17:32:05.810926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.536 [2024-12-09 17:32:05.810939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6d110 with addr=10.0.0.2, port=4420
00:21:39.536 [2024-12-09 17:32:05.810947] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6d110 is same with the state(6) to be set
00:21:39.536 [2024-12-09 17:32:05.810979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.536 [2024-12-09 17:32:05.810988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.536 [2024-12-09 17:32:05.810996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.536 [2024-12-09 17:32:05.811003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.536 [2024-12-09 17:32:05.811010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.536 [2024-12-09 17:32:05.811020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.536 [2024-12-09 17:32:05.811027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.536 [2024-12-09 17:32:05.811037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.536 [2024-12-09 17:32:05.811043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6de90 is same with the state(6) to be set
00:21:39.536 [2024-12-09 17:32:05.811072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.536 [2024-12-09 17:32:05.811081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.536 [2024-12-09 17:32:05.811088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.536 [2024-12-09 17:32:05.811139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.536 [2024-12-09 17:32:05.811204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.536 [2024-12-09 17:32:05.811257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.536 [2024-12-09 17:32:05.811313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.536 [2024-12-09 17:32:05.811364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.536 [2024-12-09 17:32:05.811417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d80d90 is same with the state(6) to be set
00:21:39.536 [2024-12-09 17:32:05.811487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.537 [2024-12-09 17:32:05.811558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.537 [2024-12-09 17:32:05.811616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.537 [2024-12-09 17:32:05.811671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.537 [2024-12-09 17:32:05.811726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.537 [2024-12-09 17:32:05.811778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.537 [2024-12-09 17:32:05.811831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.537 [2024-12-09 17:32:05.811884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.537 [2024-12-09 17:32:05.811941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1908890 is same with the state(6) to be set
00:21:39.537 [2024-12-09 17:32:05.812013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.537 [2024-12-09 17:32:05.812048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.537 [2024-12-09 17:32:05.812103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.537 [2024-12-09 17:32:05.812154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.537 [2024-12-09 17:32:05.812218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.537 [2024-12-09 17:32:05.812269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.537 [2024-12-09 17:32:05.812329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.537 [2024-12-09 17:32:05.812380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.537 [2024-12-09 17:32:05.812438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1828610 is same with the state(6) to be set
00:21:39.537 [2024-12-09 17:32:05.812510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.537 [2024-12-09 17:32:05.812547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.537 [2024-12-09 17:32:05.812627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.537 [2024-12-09 17:32:05.812683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.537 [2024-12-09 17:32:05.812737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.537 [2024-12-09 17:32:05.812795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.537 [2024-12-09 17:32:05.812851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:39.537 [2024-12-09 17:32:05.812905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.537 [2024-12-09 17:32:05.812958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f790 is same with the state(6) to be set
00:21:39.537 [2024-12-09 17:32:05.813028] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.537 [2024-12-09 17:32:05.813067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.813121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.537 [2024-12-09 17:32:05.813179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.813241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.537 [2024-12-09 17:32:05.813293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.813348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.537 [2024-12-09 17:32:05.813400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.813455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1907450 is same with the state(6) to be set 00:21:39.537 [2024-12-09 17:32:05.813524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.537 [2024-12-09 17:32:05.813567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.813621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:39.537 [2024-12-09 17:32:05.813683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.813739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.537 [2024-12-09 17:32:05.813790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.813845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.537 [2024-12-09 17:32:05.813896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.813950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3f0e0 is same with the state(6) to be set 00:21:39.537 [2024-12-09 17:32:05.814018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1913410 (9): Bad file descriptor 00:21:39.537 [2024-12-09 17:32:05.814110] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:39.537 [2024-12-09 17:32:05.814162] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:39.537 [2024-12-09 17:32:05.814474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.814490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.814502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.537 [2024-12-09 17:32:05.814509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.814518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.814527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.814537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.814544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.814594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.814645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.814698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.814750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.814803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.814857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.814915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.814967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.815022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.815072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.815128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.815185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.815222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.815254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.815296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.815329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.815365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.815397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.815433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.815468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.815503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.815539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.815575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.537 [2024-12-09 17:32:05.815611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.537 [2024-12-09 17:32:05.815646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.815678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.815714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.815748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.815784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.815821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.815857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.815892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.815928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.815964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 
[2024-12-09 17:32:05.816219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.816956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.816991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.817022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.817057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.817089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.817127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.817160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.817200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.817232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.538 [2024-12-09 17:32:05.829735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829855] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.829984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.829994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.830004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.830014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.830024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.830035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.830044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.830055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.538 [2024-12-09 17:32:05.830064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.538 [2024-12-09 17:32:05.830076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.830086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.830100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.830109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.830121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.830130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.830142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.830151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.830162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.830175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.830186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.830196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.830206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 
17:32:05.830216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.830371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6d110 (9): Bad file descriptor 00:21:39.539 [2024-12-09 17:32:05.830418] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:39.539 [2024-12-09 17:32:05.830433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6de90 (9): Bad file descriptor 00:21:39.539 [2024-12-09 17:32:05.830449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d80d90 (9): Bad file descriptor 00:21:39.539 [2024-12-09 17:32:05.830476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1908890 (9): Bad file descriptor 00:21:39.539 [2024-12-09 17:32:05.830496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1828610 (9): Bad file descriptor 00:21:39.539 [2024-12-09 17:32:05.830516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190f790 (9): Bad file descriptor 00:21:39.539 [2024-12-09 17:32:05.830534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1907450 (9): Bad file descriptor 00:21:39.539 [2024-12-09 17:32:05.830553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3f0e0 (9): Bad file descriptor 00:21:39.539 [2024-12-09 17:32:05.830679] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:39.539 [2024-12-09 17:32:05.832719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.539 [2024-12-09 17:32:05.832755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190fec0 with 
addr=10.0.0.2, port=4420 00:21:39.539 [2024-12-09 17:32:05.832768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fec0 is same with the state(6) to be set 00:21:39.539 [2024-12-09 17:32:05.832780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:39.539 [2024-12-09 17:32:05.832791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:39.539 [2024-12-09 17:32:05.832811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:39.539 [2024-12-09 17:32:05.832823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:39.539 [2024-12-09 17:32:05.832878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.832890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.832906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.832916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.832928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.832937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.832948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.832958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.832969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.832978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.832990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.539 [2024-12-09 17:32:05.833074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833198] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.539 [2024-12-09 17:32:05.833423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.539 [2024-12-09 17:32:05.833433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 
17:32:05.833556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833671] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 
[2024-12-09 17:32:05.833908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.833983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.833991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.834003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.834011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.834023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.834032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.834043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.834052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.834063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.834072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.834088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.834097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.834107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.834117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.834127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.834137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.834149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.834159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.540 [2024-12-09 17:32:05.834174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.540 [2024-12-09 17:32:05.834184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.834198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.834207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.834219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.834229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.834239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b173a0 is same with the state(6) to be set 00:21:39.541 [2024-12-09 17:32:05.835606] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:39.541 [2024-12-09 17:32:05.835668] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:39.541 [2024-12-09 17:32:05.835812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 
1] resetting controller 00:21:39.541 [2024-12-09 17:32:05.835832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:39.541 [2024-12-09 17:32:05.835867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fec0 (9): Bad file descriptor 00:21:39.541 [2024-12-09 17:32:05.835999] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:39.541 [2024-12-09 17:32:05.836475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.541 [2024-12-09 17:32:05.836502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1913410 with addr=10.0.0.2, port=4420 00:21:39.541 [2024-12-09 17:32:05.836517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1913410 is same with the state(6) to be set 00:21:39.541 [2024-12-09 17:32:05.836614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.541 [2024-12-09 17:32:05.836631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6de90 with addr=10.0.0.2, port=4420 00:21:39.541 [2024-12-09 17:32:05.836643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6de90 is same with the state(6) to be set 00:21:39.541 [2024-12-09 17:32:05.836655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:39.541 [2024-12-09 17:32:05.836665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:39.541 [2024-12-09 17:32:05.836678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:39.541 [2024-12-09 17:32:05.836690] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:39.541 [2024-12-09 17:32:05.837128] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:39.541 [2024-12-09 17:32:05.837178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1913410 (9): Bad file descriptor 00:21:39.541 [2024-12-09 17:32:05.837198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6de90 (9): Bad file descriptor 00:21:39.541 [2024-12-09 17:32:05.837277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:39.541 [2024-12-09 17:32:05.837292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:39.541 [2024-12-09 17:32:05.837304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:39.541 [2024-12-09 17:32:05.837314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:39.541 [2024-12-09 17:32:05.837326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:39.541 [2024-12-09 17:32:05.837342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:39.541 [2024-12-09 17:32:05.837353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:39.541 [2024-12-09 17:32:05.837364] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:39.541 [2024-12-09 17:32:05.840549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.840976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.840987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.541 [2024-12-09 17:32:05.841000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.841010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.841025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.841035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.841049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.841060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.841074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.841085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.841099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.841110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.841124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.841135] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.841149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.841159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.841183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.841195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.541 [2024-12-09 17:32:05.841210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.541 [2024-12-09 17:32:05.841221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 
17:32:05.841567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841706] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.841965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.841977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 
[2024-12-09 17:32:05.841989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.842003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.842013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.842027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.842038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.842051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.842061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.842075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.842086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.842101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.842115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.842129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.842139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.842154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.842171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.842183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b183c0 is same with the state(6) to be set 00:21:39.542 [2024-12-09 17:32:05.843801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.843819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.542 [2024-12-09 17:32:05.843836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.542 [2024-12-09 17:32:05.843848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.843861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.843873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.843889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.843900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.843914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.843926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.843939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.843951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.843965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.843976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.843989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.543 [2024-12-09 17:32:05.844041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844192] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 
17:32:05.844620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844766] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.543 [2024-12-09 17:32:05.844878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.543 [2024-12-09 17:32:05.844891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.544 [2024-12-09 17:32:05.844903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.544 [2024-12-09 17:32:05.844918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.544 [2024-12-09 17:32:05.844929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / "ABORTED - SQ DELETION (00/08)" record pair repeats for cid:44-63 (lba:30208-32640, step 128) ...]
00:21:39.544 [2024-12-09 17:32:05.845459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b19430 is same with the state(6) to be set
00:21:39.544 [2024-12-09 17:32:05.846908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.544 [2024-12-09 17:32:05.846922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the pair repeats for READ cid:6-61 (lba:25344-32384, step 128), then WRITE cid:0-4 (lba:32768-33280), then READ cid:62-63 (lba:32512-32640) ...]
00:21:39.546 [2024-12-09 17:32:05.847990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18650 is same with the state(6) to be set
00:21:39.546 [2024-12-09 17:32:05.849049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.546 [2024-12-09 17:32:05.849064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the pair repeats for cid:1-26 (lba:24704-27904, step 128) ...]
00:21:39.546 [2024-12-09 17:32:05.849509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:21:39.546 [2024-12-09 17:32:05.849518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-12-09 17:32:05.849526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.546 [2024-12-09 17:32:05.849535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-12-09 17:32:05.849542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 
17:32:05.849611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849703] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.849832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.849839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 
[2024-12-09 17:32:05.859123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.859438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.859450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d198f0 is same with the state(6) to be set 00:21:39.547 [2024-12-09 17:32:05.860875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.860891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.860907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.860918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.547 [2024-12-09 17:32:05.860931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.860941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.860953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.860963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.547 [2024-12-09 17:32:05.860979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-12-09 17:32:05.860989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.548 [2024-12-09 17:32:05.861319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861439] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 17:32:05.861797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.548 [2024-12-09 17:32:05.861809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.548 [2024-12-09 
17:32:05.861825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.548 [2024-12-09 17:32:05.861836 - 17:32:05.862339] nvme_qpair.c: [condensed repetitive run: nvme_io_qpair_print_command *NOTICE* READ sqid:1 cid:42-63 nsid:1, lba:29952-32640, len:128 each, each followed by spdk_nvme_print_completion *NOTICE* ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:21:39.549 [2024-12-09 17:32:05.862350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1ac00 is same with the state(6) to be set
00:21:39.549 [2024-12-09 17:32:05.863763 - 17:32:05.865197] nvme_qpair.c: [condensed repetitive run: nvme_io_qpair_print_command *NOTICE* READ sqid:1 cid:5-63 nsid:1 (lba 25216-32640) and WRITE sqid:1 cid:0-4 nsid:1 (lba 32768-33280), len:128 each, each followed by spdk_nvme_print_completion *NOTICE* ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:21:39.551 [2024-12-09 17:32:05.865207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2c61d70 is same with the state(6) to be set
00:21:39.551 [2024-12-09 17:32:05.866592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:39.551 [2024-12-09 17:32:05.866633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:39.551 [2024-12-09 17:32:05.866650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:21:39.551 [2024-12-09 17:32:05.866664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:39.551 [2024-12-09 17:32:05.866757] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:21:39.551 [2024-12-09 17:32:05.866775] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:21:39.551 [2024-12-09 17:32:05.866790] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:21:39.551 [2024-12-09 17:32:05.866806] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:21:39.551 [2024-12-09 17:32:05.867137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:39.551 [2024-12-09 17:32:05.867159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:39.551 task offset: 24576 on job bdev=Nvme10n1 fails
00:21:39.551
00:21:39.551 Latency(us)
00:21:39.551 [2024-12-09T16:32:06.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:39.551 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.551 Job: Nvme1n1 ended in about 0.95 seconds with error
00:21:39.551 Verification LBA range: start 0x0 length 0x400
00:21:39.551 Nvme1n1 : 0.95 203.12 12.69 67.71 0.00 233963.03 17850.76 236678.58
00:21:39.551 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.551 Job: Nvme2n1 ended in about 0.95 seconds with error
00:21:39.551 Verification LBA range: start 0x0 length 0x400
00:21:39.551 Nvme2n1 : 0.95 205.61 12.85 67.14 0.00 228421.05 16976.94 217704.35
00:21:39.551 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.551 Job: Nvme3n1 ended in about 0.96 seconds with error
00:21:39.551 Verification LBA range: start 0x0 length 0x400
00:21:39.551 Nvme3n1 : 0.96 200.72 12.55 66.91 0.00 228926.66 16852.11 239674.51
00:21:39.551 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.551 Job: Nvme4n1 ended in about 0.92 seconds with error
00:21:39.551 Verification LBA range: start 0x0 length 0x400
00:21:39.551 Nvme4n1 : 0.92 282.64 17.66 69.57 0.00 170529.60 3869.74 206719.27
00:21:39.551 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.551 Job: Nvme5n1 ended in about 0.96 seconds with error
00:21:39.551 Verification LBA range: start 0x0 length 0x400
00:21:39.551 Nvme5n1 : 0.96 205.44 12.84 66.74 0.00 217525.47 17476.27 203723.34
00:21:39.551 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.551 Job: Nvme6n1 ended in about 0.97 seconds with error
00:21:39.551 Verification LBA range: start 0x0 length 0x400
00:21:39.551 Nvme6n1 : 0.97 197.84 12.37 65.95 0.00 220784.40 15853.47 209715.20
00:21:39.551 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.551 Job: Nvme7n1 ended in about 0.97 seconds with error
00:21:39.551 Verification LBA range: start 0x0 length 0x400
00:21:39.551 Nvme7n1 : 0.97 197.26 12.33 65.75 0.00 217631.94 13918.60 220700.28
00:21:39.551 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.551 Job: Nvme8n1 ended in about 0.98 seconds with error
00:21:39.551 Verification LBA range: start 0x0 length 0x400
00:21:39.551 Nvme8n1 : 0.98 201.81 12.61 65.56 0.00 210397.70 16976.94 207717.91
00:21:39.551 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.551 Job: Nvme9n1 ended in about 0.94 seconds with error
00:21:39.551 Verification LBA range: start 0x0 length 0x400
00:21:39.551 Nvme9n1 : 0.94 203.77 12.74 67.92 0.00 202113.22 20971.52 243669.09
00:21:39.551 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.551 Job: Nvme10n1 ended in about 0.92 seconds with error
00:21:39.551 Verification LBA range: start 0x0 length 0x400
00:21:39.551 Nvme10n1 : 0.92 209.40 13.09 69.80 0.00 192036.39 5804.62 225693.50
00:21:39.551 [2024-12-09T16:32:06.091Z] ===================================================================================================================
00:21:39.551 [2024-12-09T16:32:06.091Z]
Total : 2107.60 131.73 673.05 0.00 211190.63 3869.74 243669.09
00:21:39.551 [2024-12-09 17:32:05.905806] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:39.551 [2024-12-09 17:32:05.905853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:39.551 [2024-12-09 17:32:05.905871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:39.551 [2024-12-09 17:32:05.906207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.551 [2024-12-09 17:32:05.906228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6d110 with addr=10.0.0.2, port=4420
00:21:39.551 [2024-12-09 17:32:05.906240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6d110 is same with the state(6) to be set
00:21:39.551 [2024-12-09 17:32:05.906462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.551 [2024-12-09 17:32:05.906476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1907450 with addr=10.0.0.2, port=4420
00:21:39.551 [2024-12-09 17:32:05.906484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1907450 is same with the state(6) to be set
00:21:39.551 [2024-12-09 17:32:05.906704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.551 [2024-12-09 17:32:05.906719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190f790 with addr=10.0.0.2, port=4420
00:21:39.551 [2024-12-09 17:32:05.906727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f790 is same with the state(6) to be set
00:21:39.551 [2024-12-09 17:32:05.906904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.551 [2024-12-09 17:32:05.906917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3f0e0 with addr=10.0.0.2, port=4420
00:21:39.551 [2024-12-09 17:32:05.906924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3f0e0 is same with the state(6) to be set
00:21:39.551 [2024-12-09 17:32:05.908268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:39.551 [2024-12-09 17:32:05.908575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.551 [2024-12-09 17:32:05.908594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1908890 with addr=10.0.0.2, port=4420
00:21:39.551 [2024-12-09 17:32:05.908603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1908890 is same with the state(6) to be set
00:21:39.551 [2024-12-09 17:32:05.908756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.551 [2024-12-09 17:32:05.908768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1828610 with addr=10.0.0.2, port=4420
00:21:39.551 [2024-12-09 17:32:05.908775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1828610 is same with the state(6) to be set
00:21:39.551 [2024-12-09 17:32:05.908981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.551 [2024-12-09 17:32:05.908994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d80d90 with addr=10.0.0.2, port=4420
00:21:39.551 [2024-12-09 17:32:05.909001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d80d90 is same with the state(6) to be set
00:21:39.551 [2024-12-09 17:32:05.909164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.551 [2024-12-09 17:32:05.909200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*:
sock connection error of tqpair=0x190fec0 with addr=10.0.0.2, port=4420 00:21:39.551 [2024-12-09 17:32:05.909207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fec0 is same with the state(6) to be set 00:21:39.551 [2024-12-09 17:32:05.909221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6d110 (9): Bad file descriptor 00:21:39.551 [2024-12-09 17:32:05.909234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1907450 (9): Bad file descriptor 00:21:39.551 [2024-12-09 17:32:05.909243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190f790 (9): Bad file descriptor 00:21:39.551 [2024-12-09 17:32:05.909252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3f0e0 (9): Bad file descriptor 00:21:39.551 [2024-12-09 17:32:05.909281] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:21:39.551 [2024-12-09 17:32:05.909298] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:21:39.551 [2024-12-09 17:32:05.909308] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:21:39.551 [2024-12-09 17:32:05.909320] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:21:39.552 [2024-12-09 17:32:05.909330] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:21:39.552 [2024-12-09 17:32:05.909401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:39.552 [2024-12-09 17:32:05.909672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.552 [2024-12-09 17:32:05.909687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6de90 with addr=10.0.0.2, port=4420
00:21:39.552 [2024-12-09 17:32:05.909695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6de90 is same with the state(6) to be set
00:21:39.552 [2024-12-09 17:32:05.909704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1908890 (9): Bad file descriptor
00:21:39.552 [2024-12-09 17:32:05.909714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1828610 (9): Bad file descriptor
00:21:39.552 [2024-12-09 17:32:05.909723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d80d90 (9): Bad file descriptor
00:21:39.552 [2024-12-09 17:32:05.909732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fec0 (9): Bad file descriptor
00:21:39.552 [2024-12-09 17:32:05.909740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:21:39.552 [2024-12-09 17:32:05.909748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:21:39.552 [2024-12-09 17:32:05.909756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:21:39.552 [2024-12-09 17:32:05.909764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:21:39.552 [2024-12-09 17:32:05.909773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:39.552 [2024-12-09 17:32:05.909779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:39.552 [2024-12-09 17:32:05.909785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:39.552 [2024-12-09 17:32:05.909791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:39.552 [2024-12-09 17:32:05.909801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:21:39.552 [2024-12-09 17:32:05.909807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:21:39.552 [2024-12-09 17:32:05.909815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:21:39.552 [2024-12-09 17:32:05.909821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:21:39.552 [2024-12-09 17:32:05.909828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:21:39.552 [2024-12-09 17:32:05.909834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:21:39.552 [2024-12-09 17:32:05.909841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:21:39.552 [2024-12-09 17:32:05.909847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:21:39.552 [2024-12-09 17:32:05.910153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.552 [2024-12-09 17:32:05.910171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1913410 with addr=10.0.0.2, port=4420
00:21:39.552 [2024-12-09 17:32:05.910180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1913410 is same with the state(6) to be set
00:21:39.552 [2024-12-09 17:32:05.910189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6de90 (9): Bad file descriptor
00:21:39.552 [2024-12-09 17:32:05.910197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:21:39.552 [2024-12-09 17:32:05.910203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:21:39.552 [2024-12-09 17:32:05.910210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:21:39.552 [2024-12-09 17:32:05.910217] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:21:39.552 [2024-12-09 17:32:05.910224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:21:39.552 [2024-12-09 17:32:05.910231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:21:39.552 [2024-12-09 17:32:05.910237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:21:39.552 [2024-12-09 17:32:05.910244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:21:39.552 [2024-12-09 17:32:05.910251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:21:39.552 [2024-12-09 17:32:05.910256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:21:39.552 [2024-12-09 17:32:05.910263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:21:39.552 [2024-12-09 17:32:05.910268] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:21:39.552 [2024-12-09 17:32:05.910275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:21:39.552 [2024-12-09 17:32:05.910284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:21:39.552 [2024-12-09 17:32:05.910290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:21:39.552 [2024-12-09 17:32:05.910297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:21:39.552 [2024-12-09 17:32:05.910943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1913410 (9): Bad file descriptor
00:21:39.552 [2024-12-09 17:32:05.910963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:21:39.552 [2024-12-09 17:32:05.910969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:21:39.552 [2024-12-09 17:32:05.910976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:21:39.552 [2024-12-09 17:32:05.910984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:21:39.552 [2024-12-09 17:32:05.911011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:39.552 [2024-12-09 17:32:05.911019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:39.552 [2024-12-09 17:32:05.911026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:39.552 [2024-12-09 17:32:05.911032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:39.894 17:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1961701
00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1961701
00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1961701
00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:40.841 17:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:40.841 rmmod nvme_tcp 00:21:40.841 rmmod nvme_fabrics 00:21:40.841 rmmod nvme_keyring 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1961431 ']' 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1961431 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1961431 ']' 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1961431 00:21:40.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1961431) - No such process 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1961431 is not found' 00:21:40.841 Process with pid 1961431 is not found 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:40.841 17:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.841 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.377 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:43.377 00:21:43.377 real 0m8.153s 00:21:43.377 user 0m20.689s 00:21:43.377 sys 0m1.384s 00:21:43.377 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.377 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.377 ************************************ 00:21:43.377 END TEST nvmf_shutdown_tc3 00:21:43.377 ************************************ 00:21:43.377 17:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:43.377 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:43.377 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:43.377 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:43.377 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.377 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:43.377 ************************************ 00:21:43.377 START TEST nvmf_shutdown_tc4 00:21:43.377 ************************************ 00:21:43.377 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:43.377 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:43.378 17:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:43.378 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:43.378 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.378 17:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:43.378 Found net devices under 0000:af:00.0: cvl_0_0 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.378 17:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:43.378 Found net devices under 0000:af:00.1: cvl_0_1 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.378 
17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:43.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:21:43.378 00:21:43.378 --- 10.0.0.2 ping statistics --- 00:21:43.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.378 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:21:43.378 00:21:43.378 --- 10.0.0.1 ping statistics --- 00:21:43.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.378 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:43.378 17:32:09 
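The namespace plumbing traced above (`nvmf/common.sh@250`–`@291`) boils down to a fixed command sequence: move the target NIC into a private netns, address both ends, open TCP/4420, and ping both directions. The sketch below is a dry-run reconstruction of that sequence, not part of the original log: the `run` helper only echoes each command (the real ones need root), and the `cvl_0_0`/`cvl_0_1`/`cvl_0_0_ns_spdk` names are the rig-specific values seen in this trace.

```shell
#!/bin/sh
# Dry-run sketch of the two-interface SPDK NVMe/TCP setup from the trace.
# run() only prints the command, so this is safe without root privileges.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                       # target namespace name from the log

run ip -4 addr flush cvl_0_0             # clear any stale addressing
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"      # target-side NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1  # initiator IP stays in the root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                   # root ns -> target, as in common.sh@290
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> initiator, common.sh@291
```

With both pings answering, the harness prefixes every target command with `ip netns exec cvl_0_0_ns_spdk`, which is why `nvmf_tgt` is launched through that wrapper later in the log.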
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1962943 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1962943 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1962943 ']' 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.378 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:43.378 [2024-12-09 17:32:09.825008] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:21:43.378 [2024-12-09 17:32:09.825060] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.378 [2024-12-09 17:32:09.903281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:43.637 [2024-12-09 17:32:09.945534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.637 [2024-12-09 17:32:09.945567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.637 [2024-12-09 17:32:09.945574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.637 [2024-12-09 17:32:09.945579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.637 [2024-12-09 17:32:09.945584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:43.637 [2024-12-09 17:32:09.947070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.637 [2024-12-09 17:32:09.947198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.637 [2024-12-09 17:32:09.947091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.637 [2024-12-09 17:32:09.947199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:44.204 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.204 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:44.204 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:44.204 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:44.204 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:44.204 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:44.205 [2024-12-09 17:32:10.712852] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.205 17:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.205 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.463 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:44.463 Malloc1 00:21:44.463 [2024-12-09 17:32:10.830082] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.463 Malloc2 00:21:44.463 Malloc3 00:21:44.463 Malloc4 00:21:44.463 Malloc5 00:21:44.722 Malloc6 00:21:44.722 Malloc7 00:21:44.722 Malloc8 00:21:44.722 Malloc9 
00:21:44.722 Malloc10 00:21:44.722 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.722 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:44.722 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:44.722 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:44.722 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1963218 00:21:44.722 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:44.722 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:44.981 [2024-12-09 17:32:11.336074] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:50.259 17:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:50.259 17:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1962943 00:21:50.259 17:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1962943 ']' 00:21:50.259 17:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1962943 00:21:50.259 17:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:50.259 17:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.259 17:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1962943 00:21:50.259 17:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:50.259 17:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:50.259 17:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1962943' 00:21:50.259 killing process with pid 1962943 00:21:50.259 17:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1962943 00:21:50.259 17:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1962943 00:21:50.259 Write completed with error (sct=0, sc=8) 00:21:50.259 starting I/O failed: -6 00:21:50.259 Write completed with error (sct=0, sc=8) 00:21:50.259 Write completed with error (sct=0, sc=8) 
00:21:50.259 Write completed with error (sct=0, sc=8) 00:21:50.259 Write completed with error (sct=0, sc=8) 00:21:50.259 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 [2024-12-09 
17:32:16.330598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:50.260 NVMe io qpair process completion error 00:21:50.260 [2024-12-09 17:32:16.330789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e25790 is same with the state(6) to be set 00:21:50.260 [2024-12-09 17:32:16.330825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e25790 is same with the state(6) to be set 00:21:50.260 [2024-12-09 17:32:16.330834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e25790 is same with the state(6) to be set 00:21:50.260 [2024-12-09 17:32:16.330840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e25790 is same with the state(6) to be set 00:21:50.260 [2024-12-09 17:32:16.330847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e25790 is same with the state(6) to be set 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 [2024-12-09 17:32:16.336153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e26fa0 is same with Write completed with error (sct=0, sc=8) 00:21:50.260 the state(6) to be set 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 [2024-12-09 17:32:16.336199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e26fa0 is same with the state(6) to be set 00:21:50.260 [2024-12-09 17:32:16.336208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e26fa0 is same with the state(6) to be set 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 [2024-12-09 17:32:16.336215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e26fa0 is same with the state(6) to be set 00:21:50.260 [2024-12-09 17:32:16.336222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e26fa0 is same with the state(6) to be set 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 [2024-12-09 17:32:16.336230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e26fa0 is same with the state(6) to be set 00:21:50.260 [2024-12-09 17:32:16.336237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e26fa0 is same with the state(6) to be set 00:21:50.260 [2024-12-09 17:32:16.336244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e26fa0 is same with Write completed with error (sct=0, sc=8) 00:21:50.260 the state(6) to be set 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 
00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 [2024-12-09 17:32:16.336695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:50.260 [2024-12-09 17:32:16.336798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27470 is same with the state(6) to be set 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 [2024-12-09 17:32:16.336827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27470 is same with the state(6) to be set 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 [2024-12-09 17:32:16.336836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27470 is same with the state(6) to be set 00:21:50.260 [2024-12-09 17:32:16.336844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27470 is same with the state(6) to be set 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 [2024-12-09 17:32:16.336850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27470 
is same with the state(6) to be set 00:21:50.260 starting I/O failed: -6 00:21:50.260 [2024-12-09 17:32:16.336857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27470 is same with the state(6) to be set 00:21:50.260 [2024-12-09 17:32:16.336864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27470 is same with the state(6) to be set 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 
00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 starting I/O failed: -6 00:21:50.260 [2024-12-09 17:32:16.337546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27940 is same with the state(6) to be set 00:21:50.260 Write completed with error (sct=0, sc=8) 00:21:50.260 [2024-12-09 17:32:16.337577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27940 is same with Write completed with error (sct=0, sc=8) 00:21:50.260 the state(6) to be set 00:21:50.260 [2024-12-09 17:32:16.337587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27940 is same with the state(6) to be set 
00:21:50.261 [2024-12-09 17:32:16.337595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27940 is same with the state(6) to be set 00:21:50.261 [2024-12-09 17:32:16.337603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ[2024-12-09 17:32:16.337606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27940 is same with transport error -6 (No such device or address) on qpair id 2 00:21:50.261 the state(6) to be set 00:21:50.261 [2024-12-09 17:32:16.337615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27940 is same with the state(6) to be set 00:21:50.261 [2024-12-09 17:32:16.337621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27940 is same with the state(6) to be set 00:21:50.261 starting I/O failed: -6 00:21:50.261 Write completed with error (sct=0, sc=8) 00:21:50.261 starting I/O failed: -6 00:21:50.261 Write completed with error (sct=0, sc=8) 00:21:50.261 starting I/O failed: -6 00:21:50.261 Write completed with error (sct=0, sc=8) 00:21:50.261 Write completed with error (sct=0, sc=8) 00:21:50.261 starting I/O failed: -6 00:21:50.261 Write completed with error (sct=0, sc=8) 00:21:50.261 starting I/O failed: -6 00:21:50.261 Write completed with error (sct=0, sc=8) 00:21:50.261 starting I/O failed: -6 00:21:50.261 Write completed with error (sct=0, sc=8) 00:21:50.261 Write completed with error (sct=0, sc=8) 00:21:50.261 starting I/O failed: -6 00:21:50.261 Write completed with error (sct=0, sc=8) 00:21:50.261 starting I/O failed: -6 00:21:50.261 Write completed with error (sct=0, sc=8) 00:21:50.261 starting I/O failed: -6 00:21:50.261 Write completed with error (sct=0, sc=8) 00:21:50.261 Write completed with error (sct=0, sc=8) 00:21:50.261 starting I/O failed: -6 00:21:50.261 Write completed with error (sct=0, sc=8) 00:21:50.261 starting I/O failed: -6 00:21:50.261 Write completed 
with error (sct=0, sc=8) 00:21:50.261 starting I/O failed: -6
00:21:50.261 [elided: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for the remaining queued writes]
[2024-12-09 17:32:16.338594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[2024-12-09 17:32:16.338878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc5fe0 is same with the state(6) to be set [repeated through 17:32:16.338941]
[2024-12-09 17:32:16.339219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc64d0 is same with the state(6) to be set [repeated through 17:32:16.339293]
[2024-12-09 17:32:16.339682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc69c0 is same with the state(6) to be set [repeated through 17:32:16.339736]
[2024-12-09 17:32:16.340022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da6c70 is same with the state(6) to be set [repeated through 17:32:16.340092]
[2024-12-09 17:32:16.340190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:50.262 NVMe io qpair process completion error
00:21:50.262 [elided: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for the remaining queued writes]
[2024-12-09 17:32:16.343907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[2024-12-09 17:32:16.344782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[2024-12-09 17:32:16.345331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc8700 is same with the state(6) to be set [repeated through 17:32:16.345408]
[2024-12-09 17:32:16.345690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc8bf0 is same with the state(6) to be set [repeated through 17:32:16.345754]
[2024-12-09 17:32:16.345768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[2024-12-09 17:32:16.346124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc90e0 is same with the state(6) to be set [repeated through 17:32:16.346188]
[2024-12-09 17:32:16.346448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc8230 is same with the state(6) to be set [repeated through 17:32:16.346496]
[2024-12-09 17:32:16.347299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:50.264 NVMe io qpair process completion error
[2024-12-09 17:32:16.348067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:50.264 [elided: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for the remaining queued writes] Write completed with error (sct=0, sc=8)
00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 [2024-12-09 17:32:16.348925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with 
error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 Write completed with error (sct=0, sc=8) 00:21:50.264 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 
starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 [2024-12-09 17:32:16.349952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 
00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: 
-6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O 
failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 [2024-12-09 17:32:16.351537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:50.265 NVMe io qpair process completion error 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, 
sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 [2024-12-09 17:32:16.352540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with 
error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.265 Write completed with error (sct=0, sc=8) 00:21:50.265 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 
00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 [2024-12-09 17:32:16.353468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 
starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 
Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 [2024-12-09 17:32:16.354455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, 
sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error 
(sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.266 starting I/O failed: -6 00:21:50.266 Write completed with error (sct=0, sc=8) 00:21:50.267 starting I/O failed: -6 00:21:50.267 Write completed with error (sct=0, sc=8) 00:21:50.267 starting I/O failed: -6 00:21:50.267 Write completed with error (sct=0, sc=8) 00:21:50.267 starting I/O failed: -6 00:21:50.267 Write completed with error (sct=0, sc=8) 00:21:50.267 starting I/O failed: -6 00:21:50.267 Write completed with error (sct=0, sc=8) 00:21:50.267 starting I/O failed: -6 00:21:50.267 Write completed with error (sct=0, sc=8) 00:21:50.267 starting I/O failed: -6 00:21:50.267 Write completed with error (sct=0, sc=8) 00:21:50.267 starting I/O failed: -6 00:21:50.267 Write completed with 
error (sct=0, sc=8)
00:21:50.267 starting I/O failed: -6
00:21:50.267 [previous two entries repeated many times; verbatim duplicates condensed]
00:21:50.267 [2024-12-09 17:32:16.356669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:50.267 NVMe io qpair process completion error
00:21:50.267 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed]
00:21:50.267 [2024-12-09 17:32:16.357578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:50.267 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed]
00:21:50.267 [2024-12-09 17:32:16.358445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:50.268 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed]
00:21:50.268 [2024-12-09 17:32:16.359441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.268 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed]
00:21:50.268 [2024-12-09 17:32:16.365448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:50.268 NVMe io qpair process completion error
00:21:50.268 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed]
00:21:50.268 [2024-12-09 17:32:16.366538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:50.269 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed]
00:21:50.269 [2024-12-09 17:32:16.367326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:50.269 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed]
00:21:50.269 [2024-12-09 17:32:16.368337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.270 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed]
00:21:50.270 [2024-12-09 17:32:16.370795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:50.270 NVMe io qpair process completion error
00:21:50.270 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed]
00:21:50.270 [2024-12-09 17:32:16.374264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions:
*ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:50.270 Write completed with error (sct=0, sc=8) 00:21:50.270 starting I/O failed: -6 00:21:50.270 Write completed with error (sct=0, sc=8) 00:21:50.270 Write completed with error (sct=0, sc=8) 00:21:50.270 starting I/O failed: -6 00:21:50.270 Write completed with error (sct=0, sc=8) 00:21:50.270 Write completed with error (sct=0, sc=8) 00:21:50.270 starting I/O failed: -6 00:21:50.270 Write completed with error (sct=0, sc=8) 00:21:50.270 Write completed with error (sct=0, sc=8) 00:21:50.270 starting I/O failed: -6 00:21:50.270 Write completed with error (sct=0, sc=8) 00:21:50.270 Write completed with error (sct=0, sc=8) 00:21:50.270 starting I/O failed: -6 00:21:50.270 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 
00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 [2024-12-09 17:32:16.375139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 
starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 
Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 [2024-12-09 17:32:16.376182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O 
failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting 
I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.271 Write completed with error (sct=0, sc=8) 00:21:50.271 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 
starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 [2024-12-09 17:32:16.377996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:50.272 NVMe io qpair process completion error 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed 
with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 [2024-12-09 17:32:16.379144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with 
error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 
00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 [2024-12-09 17:32:16.380013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with 
error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 
starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.272 Write completed with error (sct=0, sc=8) 00:21:50.272 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 [2024-12-09 17:32:16.381068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 
00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, 
sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error 
(sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 [2024-12-09 17:32:16.387230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:50.273 NVMe io qpair process completion error 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 
00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 [2024-12-09 17:32:16.388236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:50.273 starting I/O failed: -6 00:21:50.273 starting I/O failed: -6 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write 
completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.273 starting I/O failed: -6 00:21:50.273 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O 
failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 [2024-12-09 17:32:16.389145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O 
failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write 
completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 [2024-12-09 17:32:16.390240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 
starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 
00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.274 Write completed with error (sct=0, sc=8) 00:21:50.274 starting I/O failed: -6 00:21:50.275 Write completed with error (sct=0, sc=8) 00:21:50.275 starting I/O failed: -6 00:21:50.275 Write completed with error (sct=0, sc=8) 00:21:50.275 starting I/O failed: -6 00:21:50.275 Write completed with error (sct=0, 
sc=8) 00:21:50.275 starting I/O failed: -6 00:21:50.275 Write completed with error (sct=0, sc=8) 00:21:50.275 starting I/O failed: -6 00:21:50.275 Write completed with error (sct=0, sc=8) 00:21:50.275 starting I/O failed: -6 00:21:50.275 Write completed with error (sct=0, sc=8) 00:21:50.275 starting I/O failed: -6 00:21:50.275 Write completed with error (sct=0, sc=8) 00:21:50.275 starting I/O failed: -6 00:21:50.275 [2024-12-09 17:32:16.392697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:50.275 NVMe io qpair process completion error 00:21:50.275 Initializing NVMe Controllers 00:21:50.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:50.275 Controller IO queue size 128, less than required. 00:21:50.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:50.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:50.275 Controller IO queue size 128, less than required. 00:21:50.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:50.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:50.275 Controller IO queue size 128, less than required. 00:21:50.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:50.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:50.275 Controller IO queue size 128, less than required. 00:21:50.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:50.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:50.275 Controller IO queue size 128, less than required. 
00:21:50.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:50.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:21:50.275 Controller IO queue size 128, less than required. 00:21:50.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:50.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:50.275 Controller IO queue size 128, less than required. 00:21:50.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:50.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:50.275 Controller IO queue size 128, less than required. 00:21:50.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:50.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:50.275 Controller IO queue size 128, less than required. 00:21:50.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:50.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:50.275 Controller IO queue size 128, less than required. 00:21:50.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:50.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:21:50.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:21:50.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:21:50.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:21:50.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:21:50.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:21:50.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:21:50.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:21:50.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:50.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:21:50.275 Initialization complete. Launching workers. 
00:21:50.275 ======================================================== 00:21:50.275 Latency(us) 00:21:50.275 Device Information : IOPS MiB/s Average min max 00:21:50.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2252.10 96.77 56843.41 543.50 111210.67 00:21:50.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2173.70 93.40 58312.60 912.49 135186.98 00:21:50.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2189.56 94.08 58490.58 651.25 117440.64 00:21:50.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2187.41 93.99 57946.36 890.35 105113.65 00:21:50.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2227.90 95.73 56899.99 872.09 105500.16 00:21:50.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2229.40 95.79 56875.05 698.53 104496.39 00:21:50.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2197.27 94.41 57724.48 739.47 103319.08 00:21:50.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2220.61 95.42 57179.09 692.86 103620.86 00:21:50.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2138.15 91.87 59347.17 934.38 110799.56 00:21:50.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2171.35 93.30 58482.36 705.76 110599.55 00:21:50.275 ======================================================== 00:21:50.275 Total : 21987.45 944.77 57798.63 543.50 135186.98 00:21:50.275 00:21:50.275 [2024-12-09 17:32:16.395698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1532890 is same with the state(6) to be set 00:21:50.275 [2024-12-09 17:32:16.395745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1534900 is same with the state(6) to be set 00:21:50.275 [2024-12-09 17:32:16.395775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1533740 is same with the state(6) to be set 00:21:50.275 [2024-12-09 17:32:16.395803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1534ae0 is same with the state(6) to be set 00:21:50.275 [2024-12-09 17:32:16.395832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1532ef0 is same with the state(6) to be set 00:21:50.275 [2024-12-09 17:32:16.395860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1533410 is same with the state(6) to be set 00:21:50.275 [2024-12-09 17:32:16.395888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1533a70 is same with the state(6) to be set 00:21:50.275 [2024-12-09 17:32:16.395915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1532bc0 is same with the state(6) to be set 00:21:50.275 [2024-12-09 17:32:16.395943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1534720 is same with the state(6) to be set 00:21:50.275 [2024-12-09 17:32:16.395973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1532560 is same with the state(6) to be set 00:21:50.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:50.275 17:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:51.212 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1963218 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1963218 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1963218 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.213 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.213 rmmod nvme_tcp 00:21:51.213 rmmod nvme_fabrics 00:21:51.473 rmmod nvme_keyring 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1962943 ']' 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1962943 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1962943 ']' 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1962943 00:21:51.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1962943) - No such process 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1962943 is not found' 00:21:51.473 Process with pid 1962943 is not found 
00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.473 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.379 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:53.379 00:21:53.379 real 0m10.425s 00:21:53.379 user 0m27.739s 00:21:53.379 sys 0m5.111s 00:21:53.379 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.379 17:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:53.379 ************************************ 00:21:53.379 END TEST nvmf_shutdown_tc4 00:21:53.379 ************************************ 00:21:53.379 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:53.379 00:21:53.379 real 0m42.653s 00:21:53.379 user 1m48.145s 00:21:53.379 sys 0m14.025s 00:21:53.379 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.379 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:53.379 ************************************ 00:21:53.379 END TEST nvmf_shutdown 00:21:53.379 ************************************ 00:21:53.638 17:32:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:53.638 17:32:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:53.638 17:32:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.638 17:32:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:53.638 ************************************ 00:21:53.638 START TEST nvmf_nsid 00:21:53.638 ************************************ 00:21:53.638 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:53.638 * Looking for test storage... 
00:21:53.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.638 
17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:53.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.638 --rc genhtml_branch_coverage=1 00:21:53.638 --rc genhtml_function_coverage=1 00:21:53.638 --rc genhtml_legend=1 00:21:53.638 --rc geninfo_all_blocks=1 00:21:53.638 --rc 
geninfo_unexecuted_blocks=1 00:21:53.638 00:21:53.638 ' 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:53.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.638 --rc genhtml_branch_coverage=1 00:21:53.638 --rc genhtml_function_coverage=1 00:21:53.638 --rc genhtml_legend=1 00:21:53.638 --rc geninfo_all_blocks=1 00:21:53.638 --rc geninfo_unexecuted_blocks=1 00:21:53.638 00:21:53.638 ' 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:53.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.638 --rc genhtml_branch_coverage=1 00:21:53.638 --rc genhtml_function_coverage=1 00:21:53.638 --rc genhtml_legend=1 00:21:53.638 --rc geninfo_all_blocks=1 00:21:53.638 --rc geninfo_unexecuted_blocks=1 00:21:53.638 00:21:53.638 ' 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:53.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.638 --rc genhtml_branch_coverage=1 00:21:53.638 --rc genhtml_function_coverage=1 00:21:53.638 --rc genhtml_legend=1 00:21:53.638 --rc geninfo_all_blocks=1 00:21:53.638 --rc geninfo_unexecuted_blocks=1 00:21:53.638 00:21:53.638 ' 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.638 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.898 17:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:53.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:53.898 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:00.467 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.467 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:00.467 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:00.467 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:00.467 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:00.467 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:00.467 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:00.467 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:00.468 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:00.468 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:00.468 Found net devices under 0000:af:00.0: cvl_0_0 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:00.468 Found net devices under 0000:af:00.1: cvl_0_1 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:00.468 17:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:00.468 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:00.468 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:00.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:22:00.468 00:22:00.468 --- 10.0.0.2 ping statistics --- 00:22:00.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.468 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:00.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:22:00.468 00:22:00.468 --- 10.0.0.1 ping statistics --- 00:22:00.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.468 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:00.468 17:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:00.468 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1967649 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1967649 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1967649 ']' 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:00.469 [2024-12-09 17:32:26.149923] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:22:00.469 [2024-12-09 17:32:26.149973] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.469 [2024-12-09 17:32:26.226745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.469 [2024-12-09 17:32:26.267538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.469 [2024-12-09 17:32:26.267574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.469 [2024-12-09 17:32:26.267583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.469 [2024-12-09 17:32:26.267590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.469 [2024-12-09 17:32:26.267595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:00.469 [2024-12-09 17:32:26.268084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1967842 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.469 
17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=6fac5d44-7742-4aab-8680-f9e7b8d71e8e 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=8be28076-24f7-4e93-b3e1-7d5a0c94713c 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=742c7b00-edd4-4e52-b363-350be9b1dab1 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:00.469 null0 00:22:00.469 null1 00:22:00.469 [2024-12-09 17:32:26.461148] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:22:00.469 [2024-12-09 17:32:26.461199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967842 ] 00:22:00.469 null2 00:22:00.469 [2024-12-09 17:32:26.468096] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.469 [2024-12-09 17:32:26.492304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1967842 /var/tmp/tgt2.sock 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1967842 ']' 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:00.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:00.469 [2024-12-09 17:32:26.535392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.469 [2024-12-09 17:32:26.578475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:00.469 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:00.728 [2024-12-09 17:32:27.078586] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.728 [2024-12-09 17:32:27.094678] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:00.728 nvme0n1 nvme0n2 00:22:00.728 nvme1n1 00:22:00.728 17:32:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:00.728 17:32:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:00.728 17:32:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:02.103 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:02.103 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:02.103 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:22:02.103 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:02.103 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:02.103 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:02.103 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:02.103 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:02.103 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:02.103 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:02.103 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:02.103 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:02.103 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 6fac5d44-7742-4aab-8680-f9e7b8d71e8e 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:03.040 17:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6fac5d4477424aab8680f9e7b8d71e8e 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6FAC5D4477424AAB8680F9E7B8D71E8E 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 6FAC5D4477424AAB8680F9E7B8D71E8E == \6\F\A\C\5\D\4\4\7\7\4\2\4\A\A\B\8\6\8\0\F\9\E\7\B\8\D\7\1\E\8\E ]] 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 8be28076-24f7-4e93-b3e1-7d5a0c94713c 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:03.040 
17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8be2807624f74e93b3e17d5a0c94713c 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8BE2807624F74E93B3E17D5A0C94713C 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 8BE2807624F74E93B3E17D5A0C94713C == \8\B\E\2\8\0\7\6\2\4\F\7\4\E\9\3\B\3\E\1\7\D\5\A\0\C\9\4\7\1\3\C ]] 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 742c7b00-edd4-4e52-b363-350be9b1dab1 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=742c7b00edd44e52b363350be9b1dab1 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 742C7B00EDD44E52B363350BE9B1DAB1 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 742C7B00EDD44E52B363350BE9B1DAB1 == \7\4\2\C\7\B\0\0\E\D\D\4\4\E\5\2\B\3\6\3\3\5\0\B\E\9\B\1\D\A\B\1 ]] 00:22:03.040 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:03.299 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:03.299 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:03.299 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1967842 00:22:03.299 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1967842 ']' 00:22:03.299 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1967842 00:22:03.299 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:03.299 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.299 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967842 00:22:03.299 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:03.299 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:03.299 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1967842' 00:22:03.299 killing process with pid 1967842 00:22:03.299 17:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1967842 00:22:03.299 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1967842 00:22:03.558 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:03.558 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:03.558 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:03.558 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:03.558 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:03.558 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:03.558 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:03.558 rmmod nvme_tcp 00:22:03.558 rmmod nvme_fabrics 00:22:03.558 rmmod nvme_keyring 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1967649 ']' 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1967649 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1967649 ']' 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1967649 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.558 17:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967649 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1967649' 00:22:03.558 killing process with pid 1967649 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1967649 00:22:03.558 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1967649 00:22:03.817 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:03.817 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:03.817 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:03.817 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:03.817 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:03.817 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:03.817 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:03.817 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:03.817 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:03.817 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.817 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.817 17:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.352 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:06.352 00:22:06.352 real 0m12.321s 00:22:06.352 user 0m9.523s 00:22:06.352 sys 0m5.542s 00:22:06.352 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.352 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:06.352 ************************************ 00:22:06.352 END TEST nvmf_nsid 00:22:06.352 ************************************ 00:22:06.352 17:32:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:06.352 00:22:06.352 real 12m6.175s 00:22:06.352 user 26m8.665s 00:22:06.352 sys 3m43.595s 00:22:06.352 17:32:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.352 17:32:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:06.352 ************************************ 00:22:06.352 END TEST nvmf_target_extra 00:22:06.352 ************************************ 00:22:06.352 17:32:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:06.352 17:32:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:06.352 17:32:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.352 17:32:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:06.352 ************************************ 00:22:06.352 START TEST nvmf_host 00:22:06.352 ************************************ 00:22:06.352 17:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:06.352 * Looking for test storage... 
00:22:06.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:06.352 17:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:06.352 17:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:06.352 17:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:06.352 17:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:06.352 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.352 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.352 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.352 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.352 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.352 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.352 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.352 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:06.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.353 --rc genhtml_branch_coverage=1 00:22:06.353 --rc genhtml_function_coverage=1 00:22:06.353 --rc genhtml_legend=1 00:22:06.353 --rc geninfo_all_blocks=1 00:22:06.353 --rc geninfo_unexecuted_blocks=1 00:22:06.353 00:22:06.353 ' 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:06.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.353 --rc genhtml_branch_coverage=1 00:22:06.353 --rc genhtml_function_coverage=1 00:22:06.353 --rc genhtml_legend=1 00:22:06.353 --rc 
geninfo_all_blocks=1 00:22:06.353 --rc geninfo_unexecuted_blocks=1 00:22:06.353 00:22:06.353 ' 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:06.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.353 --rc genhtml_branch_coverage=1 00:22:06.353 --rc genhtml_function_coverage=1 00:22:06.353 --rc genhtml_legend=1 00:22:06.353 --rc geninfo_all_blocks=1 00:22:06.353 --rc geninfo_unexecuted_blocks=1 00:22:06.353 00:22:06.353 ' 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:06.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.353 --rc genhtml_branch_coverage=1 00:22:06.353 --rc genhtml_function_coverage=1 00:22:06.353 --rc genhtml_legend=1 00:22:06.353 --rc geninfo_all_blocks=1 00:22:06.353 --rc geninfo_unexecuted_blocks=1 00:22:06.353 00:22:06.353 ' 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:06.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.353 ************************************ 00:22:06.353 START TEST nvmf_multicontroller 00:22:06.353 ************************************ 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:06.353 * Looking for test storage... 
00:22:06.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.353 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:06.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.354 --rc genhtml_branch_coverage=1 00:22:06.354 --rc genhtml_function_coverage=1 
00:22:06.354 --rc genhtml_legend=1 00:22:06.354 --rc geninfo_all_blocks=1 00:22:06.354 --rc geninfo_unexecuted_blocks=1 00:22:06.354 00:22:06.354 ' 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:06.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.354 --rc genhtml_branch_coverage=1 00:22:06.354 --rc genhtml_function_coverage=1 00:22:06.354 --rc genhtml_legend=1 00:22:06.354 --rc geninfo_all_blocks=1 00:22:06.354 --rc geninfo_unexecuted_blocks=1 00:22:06.354 00:22:06.354 ' 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:06.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.354 --rc genhtml_branch_coverage=1 00:22:06.354 --rc genhtml_function_coverage=1 00:22:06.354 --rc genhtml_legend=1 00:22:06.354 --rc geninfo_all_blocks=1 00:22:06.354 --rc geninfo_unexecuted_blocks=1 00:22:06.354 00:22:06.354 ' 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:06.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.354 --rc genhtml_branch_coverage=1 00:22:06.354 --rc genhtml_function_coverage=1 00:22:06.354 --rc genhtml_legend=1 00:22:06.354 --rc geninfo_all_blocks=1 00:22:06.354 --rc geninfo_unexecuted_blocks=1 00:22:06.354 00:22:06.354 ' 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.354 17:32:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:06.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:06.354 17:32:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:12.924 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:12.925 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:12.925 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.925 17:32:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:12.925 Found net devices under 0000:af:00.0: cvl_0_0 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:12.925 Found net devices under 0000:af:00.1: cvl_0_1 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:12.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:22:12.925 00:22:12.925 --- 10.0.0.2 ping statistics --- 00:22:12.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.925 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:12.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:22:12.925 00:22:12.925 --- 10.0.0.1 ping statistics --- 00:22:12.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.925 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1971917 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1971917 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1971917 ']' 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.925 17:32:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.925 [2024-12-09 17:32:38.830696] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:22:12.925 [2024-12-09 17:32:38.830747] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.925 [2024-12-09 17:32:38.910641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:12.925 [2024-12-09 17:32:38.953262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.925 [2024-12-09 17:32:38.953300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:12.925 [2024-12-09 17:32:38.953307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.925 [2024-12-09 17:32:38.953313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.925 [2024-12-09 17:32:38.953318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.925 [2024-12-09 17:32:38.954685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.925 [2024-12-09 17:32:38.954793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.925 [2024-12-09 17:32:38.954793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.925 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.925 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.926 [2024-12-09 17:32:39.091621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.926 Malloc0 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.926 [2024-12-09 
17:32:39.158574] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.926 [2024-12-09 17:32:39.166514] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.926 Malloc1 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1972103 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1972103 /var/tmp/bdevperf.sock 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1972103 ']' 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.926 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.186 NVMe0n1 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.186 1 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:13.186 17:32:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.186 request: 00:22:13.186 { 00:22:13.186 "name": "NVMe0", 00:22:13.186 "trtype": "tcp", 00:22:13.186 "traddr": "10.0.0.2", 00:22:13.186 "adrfam": "ipv4", 00:22:13.186 "trsvcid": "4420", 00:22:13.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.186 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:13.186 "hostaddr": "10.0.0.1", 00:22:13.186 "prchk_reftag": false, 00:22:13.186 "prchk_guard": false, 00:22:13.186 "hdgst": false, 00:22:13.186 "ddgst": false, 00:22:13.186 "allow_unrecognized_csi": false, 00:22:13.186 "method": "bdev_nvme_attach_controller", 00:22:13.186 "req_id": 1 00:22:13.186 } 00:22:13.186 Got JSON-RPC error response 00:22:13.186 response: 00:22:13.186 { 00:22:13.186 "code": -114, 00:22:13.186 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:13.186 } 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:13.186 17:32:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:13.186 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.445 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.445 request: 00:22:13.445 { 00:22:13.445 "name": "NVMe0", 00:22:13.445 "trtype": "tcp", 00:22:13.446 "traddr": "10.0.0.2", 00:22:13.446 "adrfam": "ipv4", 00:22:13.446 "trsvcid": "4420", 00:22:13.446 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:13.446 "hostaddr": "10.0.0.1", 00:22:13.446 "prchk_reftag": false, 00:22:13.446 "prchk_guard": false, 00:22:13.446 "hdgst": false, 00:22:13.446 "ddgst": false, 00:22:13.446 "allow_unrecognized_csi": false, 00:22:13.446 "method": "bdev_nvme_attach_controller", 00:22:13.446 "req_id": 1 00:22:13.446 } 00:22:13.446 Got JSON-RPC error response 00:22:13.446 response: 00:22:13.446 { 00:22:13.446 "code": -114, 00:22:13.446 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:13.446 } 00:22:13.446 17:32:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.446 request: 00:22:13.446 { 00:22:13.446 "name": "NVMe0", 00:22:13.446 "trtype": "tcp", 00:22:13.446 "traddr": "10.0.0.2", 00:22:13.446 "adrfam": "ipv4", 00:22:13.446 "trsvcid": "4420", 00:22:13.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.446 "hostaddr": "10.0.0.1", 00:22:13.446 "prchk_reftag": false, 00:22:13.446 "prchk_guard": false, 00:22:13.446 "hdgst": false, 00:22:13.446 "ddgst": false, 00:22:13.446 "multipath": "disable", 00:22:13.446 "allow_unrecognized_csi": false, 00:22:13.446 "method": "bdev_nvme_attach_controller", 00:22:13.446 "req_id": 1 00:22:13.446 } 00:22:13.446 Got JSON-RPC error response 00:22:13.446 response: 00:22:13.446 { 00:22:13.446 "code": -114, 00:22:13.446 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:13.446 } 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.446 request: 00:22:13.446 { 00:22:13.446 "name": "NVMe0", 00:22:13.446 "trtype": "tcp", 00:22:13.446 "traddr": "10.0.0.2", 00:22:13.446 "adrfam": "ipv4", 00:22:13.446 "trsvcid": "4420", 00:22:13.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.446 "hostaddr": "10.0.0.1", 00:22:13.446 "prchk_reftag": false, 00:22:13.446 "prchk_guard": false, 00:22:13.446 "hdgst": false, 00:22:13.446 "ddgst": false, 00:22:13.446 "multipath": "failover", 00:22:13.446 "allow_unrecognized_csi": false, 00:22:13.446 "method": "bdev_nvme_attach_controller", 00:22:13.446 "req_id": 1 00:22:13.446 } 00:22:13.446 Got JSON-RPC error response 00:22:13.446 response: 00:22:13.446 { 00:22:13.446 "code": -114, 00:22:13.446 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:13.446 } 00:22:13.446 17:32:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.446 NVMe0n1 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.446 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.446 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.705 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.705 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:13.705 17:32:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:14.641 { 00:22:14.641 "results": [ 00:22:14.641 { 00:22:14.641 "job": "NVMe0n1", 00:22:14.641 "core_mask": "0x1", 00:22:14.641 "workload": "write", 00:22:14.641 "status": "finished", 00:22:14.641 "queue_depth": 128, 00:22:14.641 "io_size": 4096, 00:22:14.641 "runtime": 1.002568, 00:22:14.641 "iops": 25027.72879246096, 00:22:14.641 "mibps": 97.76456559555062, 00:22:14.641 "io_failed": 0, 00:22:14.641 "io_timeout": 0, 00:22:14.641 "avg_latency_us": 5107.470374773216, 00:22:14.641 "min_latency_us": 3027.1390476190477, 00:22:14.641 "max_latency_us": 12108.55619047619 00:22:14.641 } 00:22:14.641 ], 00:22:14.641 "core_count": 1 00:22:14.641 } 00:22:14.641 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:14.641 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.641 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:14.641 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.641 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:14.641 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1972103 00:22:14.641 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1972103 ']' 00:22:14.641 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1972103 00:22:14.641 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:14.641 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.641 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1972103 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1972103' 00:22:14.900 killing process with pid 1972103 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1972103 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1972103 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:14.900 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:14.900 [2024-12-09 17:32:39.271289] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:22:14.900 [2024-12-09 17:32:39.271337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972103 ] 00:22:14.900 [2024-12-09 17:32:39.345323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.900 [2024-12-09 17:32:39.384722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.900 [2024-12-09 17:32:39.972303] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 3d0a8af5-e602-4d4c-9f0a-13520cfcb9e2 already exists 00:22:14.900 [2024-12-09 17:32:39.972330] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:3d0a8af5-e602-4d4c-9f0a-13520cfcb9e2 alias for bdev NVMe1n1 00:22:14.900 [2024-12-09 17:32:39.972338] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:14.900 Running I/O for 1 seconds... 00:22:14.900 24964.00 IOPS, 97.52 MiB/s 00:22:14.900 Latency(us) 00:22:14.900 [2024-12-09T16:32:41.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.900 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:14.900 NVMe0n1 : 1.00 25027.73 97.76 0.00 0.00 5107.47 3027.14 12108.56 00:22:14.900 [2024-12-09T16:32:41.440Z] =================================================================================================================== 00:22:14.900 [2024-12-09T16:32:41.440Z] Total : 25027.73 97.76 0.00 0.00 5107.47 3027.14 12108.56 00:22:14.900 Received shutdown signal, test time was about 1.000000 seconds 00:22:14.900 00:22:14.900 Latency(us) 00:22:14.900 [2024-12-09T16:32:41.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.900 [2024-12-09T16:32:41.440Z] =================================================================================================================== 00:22:14.900 [2024-12-09T16:32:41.440Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:22:14.900 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:14.900 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:14.901 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:14.901 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:14.901 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:14.901 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.901 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:14.901 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.901 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.901 rmmod nvme_tcp 00:22:14.901 rmmod nvme_fabrics 00:22:14.901 rmmod nvme_keyring 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1971917 ']' 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1971917 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1971917 ']' 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1971917 
00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1971917 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1971917' 00:22:15.159 killing process with pid 1971917 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1971917 00:22:15.159 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1971917 00:22:15.418 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:15.418 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:15.418 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:15.418 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:15.418 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:15.418 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:15.418 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:15.418 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:15.418 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:15.418 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.418 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.418 17:32:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.322 17:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.322 00:22:17.322 real 0m11.134s 00:22:17.322 user 0m12.253s 00:22:17.322 sys 0m5.166s 00:22:17.322 17:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.322 17:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.322 ************************************ 00:22:17.322 END TEST nvmf_multicontroller 00:22:17.322 ************************************ 00:22:17.322 17:32:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:17.322 17:32:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:17.322 17:32:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.322 17:32:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.582 ************************************ 00:22:17.582 START TEST nvmf_aer 00:22:17.582 ************************************ 00:22:17.582 17:32:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:17.582 * Looking for test storage... 
00:22:17.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:17.582 17:32:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:17.582 17:32:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:22:17.582 17:32:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:17.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.582 --rc genhtml_branch_coverage=1 00:22:17.582 --rc genhtml_function_coverage=1 00:22:17.582 --rc genhtml_legend=1 00:22:17.582 --rc geninfo_all_blocks=1 00:22:17.582 --rc geninfo_unexecuted_blocks=1 00:22:17.582 00:22:17.582 ' 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:17.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.582 --rc 
genhtml_branch_coverage=1 00:22:17.582 --rc genhtml_function_coverage=1 00:22:17.582 --rc genhtml_legend=1 00:22:17.582 --rc geninfo_all_blocks=1 00:22:17.582 --rc geninfo_unexecuted_blocks=1 00:22:17.582 00:22:17.582 ' 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:17.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.582 --rc genhtml_branch_coverage=1 00:22:17.582 --rc genhtml_function_coverage=1 00:22:17.582 --rc genhtml_legend=1 00:22:17.582 --rc geninfo_all_blocks=1 00:22:17.582 --rc geninfo_unexecuted_blocks=1 00:22:17.582 00:22:17.582 ' 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:17.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.582 --rc genhtml_branch_coverage=1 00:22:17.582 --rc genhtml_function_coverage=1 00:22:17.582 --rc genhtml_legend=1 00:22:17.582 --rc geninfo_all_blocks=1 00:22:17.582 --rc geninfo_unexecuted_blocks=1 00:22:17.582 00:22:17.582 ' 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.582 17:32:44 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.582 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.583 17:32:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.151 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.151 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:24.151 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:24.151 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:24.151 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:24.151 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:24.151 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:24.152 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:24.152 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.152 17:32:49 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:24.152 Found net devices under 0000:af:00.0: cvl_0_0 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:24.152 Found net devices under 0000:af:00.1: cvl_0_1 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:24.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
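The `nvmf_tcp_init` plumbing traced above moves the target port into the `cvl_0_0_ns_spdk` namespace, addresses both ends, and brings the links up before the cross-namespace pings. A condensed sketch of that sequence; the `run` wrapper is a placeholder added here so the sketch can print commands instead of executing them (the real commands need root):

```shell
#!/usr/bin/env bash
# Sketch of the TCP test-net setup traced from nvmf/common.sh@250-291.
# DRY_RUN=1 only prints the commands; interface and namespace names
# mirror this log but are illustrative.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_tcp_testnet() {
    local tgt_if=$1 ini_if=$2 ns=$3
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"        # target port lives in the netns
    run ip addr add 10.0.0.1/24 dev "$ini_if"    # initiator side stays on the host
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
}

DRY_RUN=1 setup_tcp_testnet cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Putting the target interface in its own namespace is what lets a single machine act as both NVMe/TCP initiator (10.0.0.1) and target (10.0.0.2) over real hardware ports.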
00:22:24.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:22:24.152 00:22:24.152 --- 10.0.0.2 ping statistics --- 00:22:24.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.152 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:24.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:22:24.152 00:22:24.152 --- 10.0.0.1 ping statistics --- 00:22:24.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.152 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1975820 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1975820 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1975820 ']' 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.152 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.153 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.153 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.153 17:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 [2024-12-09 17:32:50.000385] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:22:24.153 [2024-12-09 17:32:50.000431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.153 [2024-12-09 17:32:50.081721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.153 [2024-12-09 17:32:50.124336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:24.153 [2024-12-09 17:32:50.124374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.153 [2024-12-09 17:32:50.124382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.153 [2024-12-09 17:32:50.124388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.153 [2024-12-09 17:32:50.124393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.153 [2024-12-09 17:32:50.125856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.153 [2024-12-09 17:32:50.125894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.153 [2024-12-09 17:32:50.125999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.153 [2024-12-09 17:32:50.126000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 [2024-12-09 17:32:50.271509] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 Malloc0 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 [2024-12-09 17:32:50.335900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
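Stripped of tracing, the subsystem bring-up in host/aer.sh (lines @14-19 above) is a short sequence of JSON-RPCs. The sketch below echoes the `rpc.py` invocations rather than executing them, since it assumes a running `nvmf_tgt`; the RPC names and arguments are taken directly from this trace:

```shell
#!/usr/bin/env bash
# The RPC sequence behind host/aer.sh@14-19, as traced above.
# $rpc defaults to echoing so the sketch runs without a live target.
bringup_aer_subsystem() {
    local rpc=${RPC:-echo rpc.py}
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}

bringup_aer_subsystem
```

The `-m 2` cap on namespaces matters for this test: the subsystem reports `max_namespaces: 2` in the `nvmf_get_subsystems` dump, and the AER is later triggered by attaching a second namespace (Malloc1).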
00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 [ 00:22:24.153 { 00:22:24.153 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:24.153 "subtype": "Discovery", 00:22:24.153 "listen_addresses": [], 00:22:24.153 "allow_any_host": true, 00:22:24.153 "hosts": [] 00:22:24.153 }, 00:22:24.153 { 00:22:24.153 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.153 "subtype": "NVMe", 00:22:24.153 "listen_addresses": [ 00:22:24.153 { 00:22:24.153 "trtype": "TCP", 00:22:24.153 "adrfam": "IPv4", 00:22:24.153 "traddr": "10.0.0.2", 00:22:24.153 "trsvcid": "4420" 00:22:24.153 } 00:22:24.153 ], 00:22:24.153 "allow_any_host": true, 00:22:24.153 "hosts": [], 00:22:24.153 "serial_number": "SPDK00000000000001", 00:22:24.153 "model_number": "SPDK bdev Controller", 00:22:24.153 "max_namespaces": 2, 00:22:24.153 "min_cntlid": 1, 00:22:24.153 "max_cntlid": 65519, 00:22:24.153 "namespaces": [ 00:22:24.153 { 00:22:24.153 "nsid": 1, 00:22:24.153 "bdev_name": "Malloc0", 00:22:24.153 "name": "Malloc0", 00:22:24.153 "nguid": "74FF00F9518F41B8873671EFA0D1407E", 00:22:24.153 "uuid": "74ff00f9-518f-41b8-8736-71efa0d1407e" 00:22:24.153 } 00:22:24.153 ] 00:22:24.153 } 00:22:24.153 ] 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1976045 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 Malloc1 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 Asynchronous Event Request test 00:22:24.153 Attaching to 10.0.0.2 00:22:24.153 Attached to 10.0.0.2 00:22:24.153 Registering asynchronous event callbacks... 00:22:24.153 Starting namespace attribute notice tests for all controllers... 00:22:24.153 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:24.153 aer_cb - Changed Namespace 00:22:24.153 Cleaning up... 
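The `waitforfile` loop visible in the trace (autotest_common.sh@1269-1280: `i=0`, `'[' 0 -lt 200 ']'`, `sleep 0.1`) is the synchronization point between aer.sh and the `aer` binary, which touches `/tmp/aer_touch_file` once its AER callback is registered. A standalone sketch of that helper, reconstructed from the traced line numbers:

```shell
#!/usr/bin/env bash
# Sketch of the waitforfile helper traced from autotest_common.sh@1269-1280:
# poll for a touch file, giving up after roughly 20s (200 iterations * 0.1s).
waitforfile() {
    local i=0
    while [ ! -e "$1" ] && [ "$i" -lt 200 ]; do
        i=$((i + 1))
        sleep 0.1
    done
    [ -e "$1" ]   # status 0 only if the file actually appeared
}

tmp=$(mktemp -u)
(sleep 0.3; touch "$tmp") &
waitforfile "$tmp" && echo "file appeared"
rm -f "$tmp"
```

Polling a touch file (instead of, say, parsing the child's stdout) keeps the handshake robust even when the test binary's output is interleaved into a shared log, as it is here.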
00:22:24.153 [
00:22:24.153 {
00:22:24.153 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:24.153 "subtype": "Discovery",
00:22:24.153 "listen_addresses": [],
00:22:24.153 "allow_any_host": true,
00:22:24.153 "hosts": []
00:22:24.153 },
00:22:24.153 {
00:22:24.153 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:24.153 "subtype": "NVMe",
00:22:24.153 "listen_addresses": [
00:22:24.153 {
00:22:24.153 "trtype": "TCP",
00:22:24.153 "adrfam": "IPv4",
00:22:24.153 "traddr": "10.0.0.2",
00:22:24.153 "trsvcid": "4420"
00:22:24.153 }
00:22:24.153 ],
00:22:24.153 "allow_any_host": true,
00:22:24.153 "hosts": [],
00:22:24.153 "serial_number": "SPDK00000000000001",
00:22:24.153 "model_number": "SPDK bdev Controller",
00:22:24.153 "max_namespaces": 2,
00:22:24.153 "min_cntlid": 1,
00:22:24.153 "max_cntlid": 65519,
00:22:24.153 "namespaces": [
00:22:24.153 {
00:22:24.153 "nsid": 1,
00:22:24.153 "bdev_name": "Malloc0",
00:22:24.153 "name": "Malloc0",
00:22:24.153 "nguid": "74FF00F9518F41B8873671EFA0D1407E",
00:22:24.153 "uuid": "74ff00f9-518f-41b8-8736-71efa0d1407e"
00:22:24.153 },
00:22:24.153 {
00:22:24.153 "nsid": 2,
00:22:24.153 "bdev_name": "Malloc1",
00:22:24.153 "name": "Malloc1",
00:22:24.153 "nguid": "76F9D8C51A444D07B8E491FF07A98A6A",
00:22:24.153 "uuid": "76f9d8c5-1a44-4d07-b8e4-91ff07a98a6a"
00:22:24.153 }
00:22:24.153 ]
00:22:24.153 }
00:22:24.153 ]
00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:24.153 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1976045
00:22:24.154 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:22:24.154 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:24.154 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:24.154 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:24.154 17:32:50
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:24.154 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.154 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.154 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.154 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:24.154 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.154 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.413 rmmod nvme_tcp 00:22:24.413 rmmod nvme_fabrics 00:22:24.413 rmmod nvme_keyring 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
1975820 ']' 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1975820 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1975820 ']' 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1975820 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1975820 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1975820' 00:22:24.413 killing process with pid 1975820 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1975820 00:22:24.413 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1975820 00:22:24.672 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.672 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.672 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.672 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:24.672 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:24.672 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.672 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.672 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.672 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.672 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.672 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.672 17:32:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.575 17:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.575 00:22:26.575 real 0m9.172s 00:22:26.575 user 0m5.072s 00:22:26.575 sys 0m4.856s 00:22:26.575 17:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.575 17:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:26.575 ************************************ 00:22:26.575 END TEST nvmf_aer 00:22:26.575 ************************************ 00:22:26.575 17:32:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:26.575 17:32:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:26.575 17:32:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.575 17:32:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.575 ************************************ 00:22:26.575 START TEST nvmf_async_init 00:22:26.575 ************************************ 00:22:26.575 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:26.835 * Looking for test storage... 
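The killprocess trace earlier in this excerpt follows a guarded pattern: confirm the pid is still alive with kill -0, look up its command name, and refuse to signal a bare sudo wrapper before killing. A hedged sketch of that pattern (helper name and exact guards here are illustrative, not the verbatim autotest_common.sh code):

```shell
# Sketch of a killprocess-style helper, modeled on the trace above:
# kill -0 probes liveness without sending a signal, ps -o comm= reveals
# the command name, and a plain "sudo" wrapper is never signaled.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1      # process no longer exists
    local process_name
    process_name=$(ps --no-headers -o comm= -p "$pid")
    [ "$process_name" = sudo ] && return 1      # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    # Reap the child if it belongs to this shell; ignore errors otherwise.
    wait "$pid" 2>/dev/null || true
}
```

In the log the guarded target is reactor_0 (the SPDK app thread name), so the sudo check passes and the nvmf_tgt at pid 1975820 is terminated cleanly before the iptables and namespace teardown.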
00:22:26.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.835 17:32:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.835 --rc genhtml_branch_coverage=1 00:22:26.835 --rc genhtml_function_coverage=1 00:22:26.835 --rc genhtml_legend=1 00:22:26.835 --rc geninfo_all_blocks=1 00:22:26.835 --rc geninfo_unexecuted_blocks=1 00:22:26.835 
00:22:26.835 ' 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.835 --rc genhtml_branch_coverage=1 00:22:26.835 --rc genhtml_function_coverage=1 00:22:26.835 --rc genhtml_legend=1 00:22:26.835 --rc geninfo_all_blocks=1 00:22:26.835 --rc geninfo_unexecuted_blocks=1 00:22:26.835 00:22:26.835 ' 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.835 --rc genhtml_branch_coverage=1 00:22:26.835 --rc genhtml_function_coverage=1 00:22:26.835 --rc genhtml_legend=1 00:22:26.835 --rc geninfo_all_blocks=1 00:22:26.835 --rc geninfo_unexecuted_blocks=1 00:22:26.835 00:22:26.835 ' 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.835 --rc genhtml_branch_coverage=1 00:22:26.835 --rc genhtml_function_coverage=1 00:22:26.835 --rc genhtml_legend=1 00:22:26.835 --rc geninfo_all_blocks=1 00:22:26.835 --rc geninfo_unexecuted_blocks=1 00:22:26.835 00:22:26.835 ' 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:26.835 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=df2f2d6666d247d7b75b5450d25112e6 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.836 17:32:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.406 17:32:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:33.406 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.406 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:33.407 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:33.407 Found net devices under 0000:af:00.0: cvl_0_0 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:33.407 Found net devices under 0000:af:00.1: cvl_0_1 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.407 17:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:33.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:33.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:22:33.407 00:22:33.407 --- 10.0.0.2 ping statistics --- 00:22:33.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.407 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:22:33.407 00:22:33.407 --- 10.0.0.1 ping statistics --- 00:22:33.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.407 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1979517 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1979517 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1979517 ']' 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.407 [2024-12-09 17:32:59.250228] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:22:33.407 [2024-12-09 17:32:59.250269] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.407 [2024-12-09 17:32:59.329380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.407 [2024-12-09 17:32:59.368210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.407 [2024-12-09 17:32:59.368244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.407 [2024-12-09 17:32:59.368251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.407 [2024-12-09 17:32:59.368258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.407 [2024-12-09 17:32:59.368263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
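The device discovery traced earlier (nvmf/common.sh@427 above) hinges on one bash array expansion that strips everything up to the last slash from each sysfs path, leaving bare interface names. A minimal standalone sketch of that step — the /sys path below is illustrative, modeled on the log's 0000:af:00.1 device:

```shell
#!/usr/bin/env bash
# Paths as a sysfs glob for a PCI function's net devices would return them
# (illustrative value; the log's device was 0000:af:00.1 -> cvl_0_1).
pci_net_devs=(
    "/sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1"
)

# "${arr[@]##*/}" applies the longest-prefix strip (through the last '/')
# to every array element, keeping only the interface names.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices: ${pci_net_devs[*]}"
```

The same one-liner works for any number of elements, which is why the script can follow it directly with `net_devs+=("${pci_net_devs[@]}")`.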
00:22:33.407 [2024-12-09 17:32:59.368738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.407 [2024-12-09 17:32:59.503101] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.407 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.408 null0 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g df2f2d6666d247d7b75b5450d25112e6 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.408 [2024-12-09 17:32:59.551352] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.408 nvme0n1 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.408 [ 00:22:33.408 { 00:22:33.408 "name": "nvme0n1", 00:22:33.408 "aliases": [ 00:22:33.408 "df2f2d66-66d2-47d7-b75b-5450d25112e6" 00:22:33.408 ], 00:22:33.408 "product_name": "NVMe disk", 00:22:33.408 "block_size": 512, 00:22:33.408 "num_blocks": 2097152, 00:22:33.408 "uuid": "df2f2d66-66d2-47d7-b75b-5450d25112e6", 00:22:33.408 "numa_id": 1, 00:22:33.408 "assigned_rate_limits": { 00:22:33.408 "rw_ios_per_sec": 0, 00:22:33.408 "rw_mbytes_per_sec": 0, 00:22:33.408 "r_mbytes_per_sec": 0, 00:22:33.408 "w_mbytes_per_sec": 0 00:22:33.408 }, 00:22:33.408 "claimed": false, 00:22:33.408 "zoned": false, 00:22:33.408 "supported_io_types": { 00:22:33.408 "read": true, 00:22:33.408 "write": true, 00:22:33.408 "unmap": false, 00:22:33.408 "flush": true, 00:22:33.408 "reset": true, 00:22:33.408 "nvme_admin": true, 00:22:33.408 "nvme_io": true, 00:22:33.408 "nvme_io_md": false, 00:22:33.408 "write_zeroes": true, 00:22:33.408 "zcopy": false, 00:22:33.408 "get_zone_info": false, 00:22:33.408 "zone_management": false, 00:22:33.408 "zone_append": false, 00:22:33.408 "compare": true, 00:22:33.408 "compare_and_write": true, 00:22:33.408 "abort": true, 00:22:33.408 "seek_hole": false, 00:22:33.408 "seek_data": false, 00:22:33.408 "copy": true, 00:22:33.408 
"nvme_iov_md": false 00:22:33.408 }, 00:22:33.408 "memory_domains": [ 00:22:33.408 { 00:22:33.408 "dma_device_id": "system", 00:22:33.408 "dma_device_type": 1 00:22:33.408 } 00:22:33.408 ], 00:22:33.408 "driver_specific": { 00:22:33.408 "nvme": [ 00:22:33.408 { 00:22:33.408 "trid": { 00:22:33.408 "trtype": "TCP", 00:22:33.408 "adrfam": "IPv4", 00:22:33.408 "traddr": "10.0.0.2", 00:22:33.408 "trsvcid": "4420", 00:22:33.408 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:33.408 }, 00:22:33.408 "ctrlr_data": { 00:22:33.408 "cntlid": 1, 00:22:33.408 "vendor_id": "0x8086", 00:22:33.408 "model_number": "SPDK bdev Controller", 00:22:33.408 "serial_number": "00000000000000000000", 00:22:33.408 "firmware_revision": "25.01", 00:22:33.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:33.408 "oacs": { 00:22:33.408 "security": 0, 00:22:33.408 "format": 0, 00:22:33.408 "firmware": 0, 00:22:33.408 "ns_manage": 0 00:22:33.408 }, 00:22:33.408 "multi_ctrlr": true, 00:22:33.408 "ana_reporting": false 00:22:33.408 }, 00:22:33.408 "vs": { 00:22:33.408 "nvme_version": "1.3" 00:22:33.408 }, 00:22:33.408 "ns_data": { 00:22:33.408 "id": 1, 00:22:33.408 "can_share": true 00:22:33.408 } 00:22:33.408 } 00:22:33.408 ], 00:22:33.408 "mp_policy": "active_passive" 00:22:33.408 } 00:22:33.408 } 00:22:33.408 ] 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.408 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.408 [2024-12-09 17:32:59.811880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:33.408 [2024-12-09 17:32:59.811935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x11571c0 (9): Bad file descriptor 00:22:33.408 [2024-12-09 17:32:59.944260] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:33.667 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.667 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:33.667 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.667 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.667 [ 00:22:33.667 { 00:22:33.667 "name": "nvme0n1", 00:22:33.667 "aliases": [ 00:22:33.667 "df2f2d66-66d2-47d7-b75b-5450d25112e6" 00:22:33.667 ], 00:22:33.667 "product_name": "NVMe disk", 00:22:33.667 "block_size": 512, 00:22:33.667 "num_blocks": 2097152, 00:22:33.667 "uuid": "df2f2d66-66d2-47d7-b75b-5450d25112e6", 00:22:33.667 "numa_id": 1, 00:22:33.667 "assigned_rate_limits": { 00:22:33.667 "rw_ios_per_sec": 0, 00:22:33.667 "rw_mbytes_per_sec": 0, 00:22:33.667 "r_mbytes_per_sec": 0, 00:22:33.667 "w_mbytes_per_sec": 0 00:22:33.667 }, 00:22:33.667 "claimed": false, 00:22:33.667 "zoned": false, 00:22:33.667 "supported_io_types": { 00:22:33.667 "read": true, 00:22:33.667 "write": true, 00:22:33.667 "unmap": false, 00:22:33.667 "flush": true, 00:22:33.667 "reset": true, 00:22:33.667 "nvme_admin": true, 00:22:33.667 "nvme_io": true, 00:22:33.667 "nvme_io_md": false, 00:22:33.667 "write_zeroes": true, 00:22:33.667 "zcopy": false, 00:22:33.667 "get_zone_info": false, 00:22:33.667 "zone_management": false, 00:22:33.667 "zone_append": false, 00:22:33.667 "compare": true, 00:22:33.667 "compare_and_write": true, 00:22:33.667 "abort": true, 00:22:33.667 "seek_hole": false, 00:22:33.667 "seek_data": false, 00:22:33.667 "copy": true, 00:22:33.667 "nvme_iov_md": false 00:22:33.667 }, 00:22:33.667 "memory_domains": [ 
00:22:33.667 { 00:22:33.667 "dma_device_id": "system", 00:22:33.667 "dma_device_type": 1 00:22:33.667 } 00:22:33.667 ], 00:22:33.667 "driver_specific": { 00:22:33.667 "nvme": [ 00:22:33.667 { 00:22:33.667 "trid": { 00:22:33.667 "trtype": "TCP", 00:22:33.667 "adrfam": "IPv4", 00:22:33.667 "traddr": "10.0.0.2", 00:22:33.667 "trsvcid": "4420", 00:22:33.667 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:33.667 }, 00:22:33.667 "ctrlr_data": { 00:22:33.667 "cntlid": 2, 00:22:33.668 "vendor_id": "0x8086", 00:22:33.668 "model_number": "SPDK bdev Controller", 00:22:33.668 "serial_number": "00000000000000000000", 00:22:33.668 "firmware_revision": "25.01", 00:22:33.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:33.668 "oacs": { 00:22:33.668 "security": 0, 00:22:33.668 "format": 0, 00:22:33.668 "firmware": 0, 00:22:33.668 "ns_manage": 0 00:22:33.668 }, 00:22:33.668 "multi_ctrlr": true, 00:22:33.668 "ana_reporting": false 00:22:33.668 }, 00:22:33.668 "vs": { 00:22:33.668 "nvme_version": "1.3" 00:22:33.668 }, 00:22:33.668 "ns_data": { 00:22:33.668 "id": 1, 00:22:33.668 "can_share": true 00:22:33.668 } 00:22:33.668 } 00:22:33.668 ], 00:22:33.668 "mp_policy": "active_passive" 00:22:33.668 } 00:22:33.668 } 00:22:33.668 ] 00:22:33.668 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.668 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.668 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.668 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.668 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.668 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:33.668 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.450lg1n67E 
00:22:33.668 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:33.668 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.450lg1n67E 00:22:33.668 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.450lg1n67E 00:22:33.668 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.668 17:32:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.668 [2024-12-09 17:33:00.020499] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:33.668 [2024-12-09 17:33:00.020625] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.668 [2024-12-09 17:33:00.040562] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:33.668 nvme0n1 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.668 [ 00:22:33.668 { 00:22:33.668 "name": "nvme0n1", 00:22:33.668 "aliases": [ 00:22:33.668 "df2f2d66-66d2-47d7-b75b-5450d25112e6" 00:22:33.668 ], 00:22:33.668 "product_name": "NVMe disk", 00:22:33.668 "block_size": 512, 00:22:33.668 "num_blocks": 2097152, 00:22:33.668 "uuid": "df2f2d66-66d2-47d7-b75b-5450d25112e6", 00:22:33.668 "numa_id": 1, 00:22:33.668 "assigned_rate_limits": { 00:22:33.668 "rw_ios_per_sec": 0, 00:22:33.668 
"rw_mbytes_per_sec": 0, 00:22:33.668 "r_mbytes_per_sec": 0, 00:22:33.668 "w_mbytes_per_sec": 0 00:22:33.668 }, 00:22:33.668 "claimed": false, 00:22:33.668 "zoned": false, 00:22:33.668 "supported_io_types": { 00:22:33.668 "read": true, 00:22:33.668 "write": true, 00:22:33.668 "unmap": false, 00:22:33.668 "flush": true, 00:22:33.668 "reset": true, 00:22:33.668 "nvme_admin": true, 00:22:33.668 "nvme_io": true, 00:22:33.668 "nvme_io_md": false, 00:22:33.668 "write_zeroes": true, 00:22:33.668 "zcopy": false, 00:22:33.668 "get_zone_info": false, 00:22:33.668 "zone_management": false, 00:22:33.668 "zone_append": false, 00:22:33.668 "compare": true, 00:22:33.668 "compare_and_write": true, 00:22:33.668 "abort": true, 00:22:33.668 "seek_hole": false, 00:22:33.668 "seek_data": false, 00:22:33.668 "copy": true, 00:22:33.668 "nvme_iov_md": false 00:22:33.668 }, 00:22:33.668 "memory_domains": [ 00:22:33.668 { 00:22:33.668 "dma_device_id": "system", 00:22:33.668 "dma_device_type": 1 00:22:33.668 } 00:22:33.668 ], 00:22:33.668 "driver_specific": { 00:22:33.668 "nvme": [ 00:22:33.668 { 00:22:33.668 "trid": { 00:22:33.668 "trtype": "TCP", 00:22:33.668 "adrfam": "IPv4", 00:22:33.668 "traddr": "10.0.0.2", 00:22:33.668 "trsvcid": "4421", 00:22:33.668 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:33.668 }, 00:22:33.668 "ctrlr_data": { 00:22:33.668 "cntlid": 3, 00:22:33.668 "vendor_id": "0x8086", 00:22:33.668 "model_number": "SPDK bdev Controller", 00:22:33.668 "serial_number": "00000000000000000000", 00:22:33.668 "firmware_revision": "25.01", 00:22:33.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:33.668 "oacs": { 00:22:33.668 "security": 0, 00:22:33.668 "format": 0, 00:22:33.668 "firmware": 0, 00:22:33.668 "ns_manage": 0 00:22:33.668 }, 00:22:33.668 "multi_ctrlr": true, 00:22:33.668 "ana_reporting": false 00:22:33.668 }, 00:22:33.668 "vs": { 00:22:33.668 "nvme_version": "1.3" 00:22:33.668 }, 00:22:33.668 "ns_data": { 00:22:33.668 "id": 1, 00:22:33.668 "can_share": true 00:22:33.668 } 
00:22:33.668 } 00:22:33.668 ], 00:22:33.668 "mp_policy": "active_passive" 00:22:33.668 } 00:22:33.668 } 00:22:33.668 ] 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.450lg1n67E 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.668 rmmod nvme_tcp 00:22:33.668 rmmod nvme_fabrics 00:22:33.668 rmmod nvme_keyring 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:33.668 17:33:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1979517 ']' 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1979517 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1979517 ']' 00:22:33.668 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1979517 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1979517 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1979517' 00:22:33.927 killing process with pid 1979517 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1979517 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1979517 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:33.927 
17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.927 17:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:36.462 00:22:36.462 real 0m9.363s 00:22:36.462 user 0m3.110s 00:22:36.462 sys 0m4.675s 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:36.462 ************************************ 00:22:36.462 END TEST nvmf_async_init 00:22:36.462 ************************************ 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.462 ************************************ 00:22:36.462 START TEST dma 00:22:36.462 ************************************ 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
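The async_init teardown just above ends with the `killprocess` helper (autotest_common.sh@954-978): probe the pid with `kill -0`, resolve its command name with `ps`, refuse to kill `sudo` itself, then signal and reap. A minimal unprivileged sketch of that pattern, using a background `sleep` as a stand-in for the nvmf_tgt reactor (helper-free; the `sudo` guard from the real helper is noted in a comment):

```shell
#!/usr/bin/env bash
# Stand-in target process for the kill/reap pattern.
sleep 60 &
pid=$!

# kill -0 delivers no signal; it only tests that the pid exists and is ours.
if kill -0 "$pid" 2>/dev/null; then
    # Same lookup the log shows (comm= suppresses the header); the real
    # helper additionally bails out if this resolves to "sudo".
    process_name=$(ps -o comm= -p "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
fi

# Reap the child so the pid is fully gone; ignore the job-status noise.
wait "$pid" 2>/dev/null || true
```

After the `wait`, a second `kill -0` on the pid fails, which is the condition the teardown relies on before tearing down the namespace.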
00:22:36.462 * Looking for test storage... 00:22:36.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:36.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.462 --rc genhtml_branch_coverage=1 00:22:36.462 --rc genhtml_function_coverage=1 00:22:36.462 --rc genhtml_legend=1 00:22:36.462 --rc geninfo_all_blocks=1 00:22:36.462 --rc geninfo_unexecuted_blocks=1 00:22:36.462 00:22:36.462 ' 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:36.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.462 --rc genhtml_branch_coverage=1 00:22:36.462 --rc genhtml_function_coverage=1 
00:22:36.462 --rc genhtml_legend=1 00:22:36.462 --rc geninfo_all_blocks=1 00:22:36.462 --rc geninfo_unexecuted_blocks=1 00:22:36.462 00:22:36.462 ' 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:36.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.462 --rc genhtml_branch_coverage=1 00:22:36.462 --rc genhtml_function_coverage=1 00:22:36.462 --rc genhtml_legend=1 00:22:36.462 --rc geninfo_all_blocks=1 00:22:36.462 --rc geninfo_unexecuted_blocks=1 00:22:36.462 00:22:36.462 ' 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:36.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.462 --rc genhtml_branch_coverage=1 00:22:36.462 --rc genhtml_function_coverage=1 00:22:36.462 --rc genhtml_legend=1 00:22:36.462 --rc geninfo_all_blocks=1 00:22:36.462 --rc geninfo_unexecuted_blocks=1 00:22:36.462 00:22:36.462 ' 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.462 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:36.463 
17:33:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:36.463 00:22:36.463 real 0m0.208s 00:22:36.463 user 0m0.127s 00:22:36.463 sys 0m0.093s 00:22:36.463 17:33:02 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:36.463 ************************************ 00:22:36.463 END TEST dma 00:22:36.463 ************************************ 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.463 ************************************ 00:22:36.463 START TEST nvmf_identify 00:22:36.463 ************************************ 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:36.463 * Looking for test storage... 
00:22:36.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:36.463 17:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:36.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.722 --rc genhtml_branch_coverage=1 00:22:36.722 --rc genhtml_function_coverage=1 00:22:36.722 --rc genhtml_legend=1 00:22:36.722 --rc geninfo_all_blocks=1 00:22:36.722 --rc geninfo_unexecuted_blocks=1 00:22:36.722 00:22:36.722 ' 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:22:36.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.722 --rc genhtml_branch_coverage=1 00:22:36.722 --rc genhtml_function_coverage=1 00:22:36.722 --rc genhtml_legend=1 00:22:36.722 --rc geninfo_all_blocks=1 00:22:36.722 --rc geninfo_unexecuted_blocks=1 00:22:36.722 00:22:36.722 ' 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:36.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.722 --rc genhtml_branch_coverage=1 00:22:36.722 --rc genhtml_function_coverage=1 00:22:36.722 --rc genhtml_legend=1 00:22:36.722 --rc geninfo_all_blocks=1 00:22:36.722 --rc geninfo_unexecuted_blocks=1 00:22:36.722 00:22:36.722 ' 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:36.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.722 --rc genhtml_branch_coverage=1 00:22:36.722 --rc genhtml_function_coverage=1 00:22:36.722 --rc genhtml_legend=1 00:22:36.722 --rc geninfo_all_blocks=1 00:22:36.722 --rc geninfo_unexecuted_blocks=1 00:22:36.722 00:22:36.722 ' 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.722 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:36.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:36.723 17:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.106 17:33:08 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:42.106 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.106 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.107 
17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:42.107 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:42.107 Found net devices under 0000:af:00.0: cvl_0_0 00:22:42.107 17:33:08 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:42.107 Found net devices under 0000:af:00.1: cvl_0_1 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.107 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:22:42.365 00:22:42.365 --- 10.0.0.2 ping statistics --- 00:22:42.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.365 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:22:42.365 00:22:42.365 --- 10.0.0.1 ping statistics --- 00:22:42.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.365 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:42.365 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.624 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1983792 00:22:42.624 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:42.624 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:42.624 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1983792 00:22:42.624 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1983792 ']' 00:22:42.624 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.624 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.624 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:42.624 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.624 17:33:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.624 [2024-12-09 17:33:08.953678] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:22:42.624 [2024-12-09 17:33:08.953720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.624 [2024-12-09 17:33:09.031421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.624 [2024-12-09 17:33:09.073860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.624 [2024-12-09 17:33:09.073897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.624 [2024-12-09 17:33:09.073904] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.624 [2024-12-09 17:33:09.073910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.624 [2024-12-09 17:33:09.073915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:42.624 [2024-12-09 17:33:09.075384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.624 [2024-12-09 17:33:09.075495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.624 [2024-12-09 17:33:09.075513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.624 [2024-12-09 17:33:09.075515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.884 [2024-12-09 17:33:09.189120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.884 Malloc0 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.884 17:33:09 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.884 [2024-12-09 17:33:09.298602] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.884 17:33:09 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.884 [ 00:22:42.884 { 00:22:42.884 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:42.884 "subtype": "Discovery", 00:22:42.884 "listen_addresses": [ 00:22:42.884 { 00:22:42.884 "trtype": "TCP", 00:22:42.884 "adrfam": "IPv4", 00:22:42.884 "traddr": "10.0.0.2", 00:22:42.884 "trsvcid": "4420" 00:22:42.884 } 00:22:42.884 ], 00:22:42.884 "allow_any_host": true, 00:22:42.884 "hosts": [] 00:22:42.884 }, 00:22:42.884 { 00:22:42.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.884 "subtype": "NVMe", 00:22:42.884 "listen_addresses": [ 00:22:42.884 { 00:22:42.884 "trtype": "TCP", 00:22:42.884 "adrfam": "IPv4", 00:22:42.884 "traddr": "10.0.0.2", 00:22:42.884 "trsvcid": "4420" 00:22:42.884 } 00:22:42.884 ], 00:22:42.884 "allow_any_host": true, 00:22:42.884 "hosts": [], 00:22:42.884 "serial_number": "SPDK00000000000001", 00:22:42.884 "model_number": "SPDK bdev Controller", 00:22:42.884 "max_namespaces": 32, 00:22:42.884 "min_cntlid": 1, 00:22:42.884 "max_cntlid": 65519, 00:22:42.884 "namespaces": [ 00:22:42.884 { 00:22:42.884 "nsid": 1, 00:22:42.884 "bdev_name": "Malloc0", 00:22:42.884 "name": "Malloc0", 00:22:42.884 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:42.884 "eui64": "ABCDEF0123456789", 00:22:42.884 "uuid": "5ffd0e04-12b6-4751-a6ea-feeb02e1c960" 00:22:42.884 } 00:22:42.884 ] 00:22:42.884 } 00:22:42.884 ] 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.884 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:42.884 [2024-12-09 17:33:09.351866] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:22:42.884 [2024-12-09 17:33:09.351914] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1983825 ] 00:22:42.884 [2024-12-09 17:33:09.389886] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:42.884 [2024-12-09 17:33:09.389937] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:42.884 [2024-12-09 17:33:09.389943] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:42.884 [2024-12-09 17:33:09.389956] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:42.884 [2024-12-09 17:33:09.389964] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:42.884 [2024-12-09 17:33:09.397395] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:42.884 [2024-12-09 17:33:09.397427] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1381690 0 00:22:42.884 [2024-12-09 17:33:09.405175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:42.884 [2024-12-09 17:33:09.405189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:42.884 [2024-12-09 17:33:09.405193] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:42.884 [2024-12-09 17:33:09.405199] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:42.884 [2024-12-09 17:33:09.405234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.884 [2024-12-09 17:33:09.405239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.884 [2024-12-09 17:33:09.405243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1381690) 00:22:42.884 [2024-12-09 17:33:09.405254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:42.884 [2024-12-09 17:33:09.405271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3100, cid 0, qid 0 00:22:42.884 [2024-12-09 17:33:09.413174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.884 [2024-12-09 17:33:09.413183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.884 [2024-12-09 17:33:09.413186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.884 [2024-12-09 17:33:09.413190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3100) on tqpair=0x1381690 00:22:42.884 [2024-12-09 17:33:09.413203] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:42.884 [2024-12-09 17:33:09.413209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:42.884 [2024-12-09 17:33:09.413214] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:42.884 [2024-12-09 17:33:09.413227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.884 [2024-12-09 17:33:09.413230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.884 [2024-12-09 17:33:09.413233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1381690) 
00:22:42.884 [2024-12-09 17:33:09.413240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-12-09 17:33:09.413253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3100, cid 0, qid 0 00:22:42.884 [2024-12-09 17:33:09.413432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.884 [2024-12-09 17:33:09.413437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.884 [2024-12-09 17:33:09.413440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.884 [2024-12-09 17:33:09.413444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3100) on tqpair=0x1381690 00:22:42.884 [2024-12-09 17:33:09.413449] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:42.884 [2024-12-09 17:33:09.413456] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:42.884 [2024-12-09 17:33:09.413462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.884 [2024-12-09 17:33:09.413465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.884 [2024-12-09 17:33:09.413468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1381690) 00:22:42.884 [2024-12-09 17:33:09.413474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-12-09 17:33:09.413484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3100, cid 0, qid 0 00:22:42.884 [2024-12-09 17:33:09.413550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.884 [2024-12-09 17:33:09.413555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:42.884 [2024-12-09 17:33:09.413558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.884 [2024-12-09 17:33:09.413562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3100) on tqpair=0x1381690 00:22:42.884 [2024-12-09 17:33:09.413567] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:42.884 [2024-12-09 17:33:09.413574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:42.884 [2024-12-09 17:33:09.413582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.884 [2024-12-09 17:33:09.413586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.884 [2024-12-09 17:33:09.413589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1381690) 00:22:42.885 [2024-12-09 17:33:09.413594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-12-09 17:33:09.413604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3100, cid 0, qid 0 00:22:42.885 [2024-12-09 17:33:09.413661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.885 [2024-12-09 17:33:09.413667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.885 [2024-12-09 17:33:09.413670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.413673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3100) on tqpair=0x1381690 00:22:42.885 [2024-12-09 17:33:09.413678] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:42.885 [2024-12-09 17:33:09.413685] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.413689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.413692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1381690) 00:22:42.885 [2024-12-09 17:33:09.413697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-12-09 17:33:09.413707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3100, cid 0, qid 0 00:22:42.885 [2024-12-09 17:33:09.413772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.885 [2024-12-09 17:33:09.413778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.885 [2024-12-09 17:33:09.413780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.413784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3100) on tqpair=0x1381690 00:22:42.885 [2024-12-09 17:33:09.413788] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:42.885 [2024-12-09 17:33:09.413792] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:42.885 [2024-12-09 17:33:09.413800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:42.885 [2024-12-09 17:33:09.413908] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:42.885 [2024-12-09 17:33:09.413912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:42.885 [2024-12-09 17:33:09.413920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.413923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.413926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1381690) 00:22:42.885 [2024-12-09 17:33:09.413931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-12-09 17:33:09.413941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3100, cid 0, qid 0 00:22:42.885 [2024-12-09 17:33:09.414002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.885 [2024-12-09 17:33:09.414008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.885 [2024-12-09 17:33:09.414011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.414014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3100) on tqpair=0x1381690 00:22:42.885 [2024-12-09 17:33:09.414023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:42.885 [2024-12-09 17:33:09.414031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.414034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.414037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1381690) 00:22:42.885 [2024-12-09 17:33:09.414042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-12-09 17:33:09.414051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3100, cid 0, qid 0 00:22:42.885 [2024-12-09 
17:33:09.414110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.885 [2024-12-09 17:33:09.414116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.885 [2024-12-09 17:33:09.414119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.414122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3100) on tqpair=0x1381690 00:22:42.885 [2024-12-09 17:33:09.414126] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:42.885 [2024-12-09 17:33:09.414131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:42.885 [2024-12-09 17:33:09.414137] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:42.885 [2024-12-09 17:33:09.414144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:42.885 [2024-12-09 17:33:09.414152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.414155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1381690) 00:22:42.885 [2024-12-09 17:33:09.414161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-12-09 17:33:09.414180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3100, cid 0, qid 0 00:22:42.885 [2024-12-09 17:33:09.414278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.885 [2024-12-09 17:33:09.414284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:22:42.885 [2024-12-09 17:33:09.414287] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.414290] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1381690): datao=0, datal=4096, cccid=0 00:22:42.885 [2024-12-09 17:33:09.414295] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13e3100) on tqpair(0x1381690): expected_datao=0, payload_size=4096 00:22:42.885 [2024-12-09 17:33:09.414299] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.414314] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.885 [2024-12-09 17:33:09.414318] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.146 [2024-12-09 17:33:09.456302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.146 [2024-12-09 17:33:09.456313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.146 [2024-12-09 17:33:09.456316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.146 [2024-12-09 17:33:09.456320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3100) on tqpair=0x1381690 00:22:43.146 [2024-12-09 17:33:09.456327] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:43.146 [2024-12-09 17:33:09.456335] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:43.146 [2024-12-09 17:33:09.456342] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:43.146 [2024-12-09 17:33:09.456347] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:43.146 [2024-12-09 17:33:09.456352] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:43.146 [2024-12-09 17:33:09.456356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:43.146 [2024-12-09 17:33:09.456365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:43.146 [2024-12-09 17:33:09.456371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.146 [2024-12-09 17:33:09.456375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.146 [2024-12-09 17:33:09.456378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1381690) 00:22:43.146 [2024-12-09 17:33:09.456385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:43.146 [2024-12-09 17:33:09.456396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3100, cid 0, qid 0 00:22:43.146 [2024-12-09 17:33:09.456460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.147 [2024-12-09 17:33:09.456466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.147 [2024-12-09 17:33:09.456469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3100) on tqpair=0x1381690 00:22:43.147 [2024-12-09 17:33:09.456478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1381690) 00:22:43.147 [2024-12-09 17:33:09.456490] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.147 [2024-12-09 17:33:09.456495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1381690) 00:22:43.147 [2024-12-09 17:33:09.456506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.147 [2024-12-09 17:33:09.456510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1381690) 00:22:43.147 [2024-12-09 17:33:09.456521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.147 [2024-12-09 17:33:09.456526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456529] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.147 [2024-12-09 17:33:09.456537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.147 [2024-12-09 17:33:09.456541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:43.147 [2024-12-09 17:33:09.456551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:43.147 [2024-12-09 17:33:09.456557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1381690) 00:22:43.147 [2024-12-09 17:33:09.456567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.147 [2024-12-09 17:33:09.456578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3100, cid 0, qid 0 00:22:43.147 [2024-12-09 17:33:09.456582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3280, cid 1, qid 0 00:22:43.147 [2024-12-09 17:33:09.456586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3400, cid 2, qid 0 00:22:43.147 [2024-12-09 17:33:09.456590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.147 [2024-12-09 17:33:09.456594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3700, cid 4, qid 0 00:22:43.147 [2024-12-09 17:33:09.456690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.147 [2024-12-09 17:33:09.456695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.147 [2024-12-09 17:33:09.456698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3700) on tqpair=0x1381690 00:22:43.147 [2024-12-09 17:33:09.456706] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:43.147 [2024-12-09 17:33:09.456710] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:22:43.147 [2024-12-09 17:33:09.456719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1381690) 00:22:43.147 [2024-12-09 17:33:09.456728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.147 [2024-12-09 17:33:09.456737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3700, cid 4, qid 0 00:22:43.147 [2024-12-09 17:33:09.456811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.147 [2024-12-09 17:33:09.456816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.147 [2024-12-09 17:33:09.456819] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456822] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1381690): datao=0, datal=4096, cccid=4 00:22:43.147 [2024-12-09 17:33:09.456826] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13e3700) on tqpair(0x1381690): expected_datao=0, payload_size=4096 00:22:43.147 [2024-12-09 17:33:09.456830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456835] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456839] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.147 [2024-12-09 17:33:09.456868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.147 [2024-12-09 17:33:09.456871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x13e3700) on tqpair=0x1381690 00:22:43.147 [2024-12-09 17:33:09.456884] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:43.147 [2024-12-09 17:33:09.456905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1381690) 00:22:43.147 [2024-12-09 17:33:09.456914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.147 [2024-12-09 17:33:09.456922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.456928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1381690) 00:22:43.147 [2024-12-09 17:33:09.456933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.147 [2024-12-09 17:33:09.456946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3700, cid 4, qid 0 00:22:43.147 [2024-12-09 17:33:09.456951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3880, cid 5, qid 0 00:22:43.147 [2024-12-09 17:33:09.457051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.147 [2024-12-09 17:33:09.457057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.147 [2024-12-09 17:33:09.457060] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.457063] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1381690): datao=0, datal=1024, cccid=4 00:22:43.147 [2024-12-09 17:33:09.457066] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13e3700) on tqpair(0x1381690): expected_datao=0, payload_size=1024 00:22:43.147 [2024-12-09 17:33:09.457070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.457075] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.457078] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.457083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.147 [2024-12-09 17:33:09.457087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.147 [2024-12-09 17:33:09.457090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.457093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3880) on tqpair=0x1381690 00:22:43.147 [2024-12-09 17:33:09.501176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.147 [2024-12-09 17:33:09.501185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.147 [2024-12-09 17:33:09.501189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.501192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3700) on tqpair=0x1381690 00:22:43.147 [2024-12-09 17:33:09.501202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.501206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1381690) 00:22:43.147 [2024-12-09 17:33:09.501212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.147 [2024-12-09 17:33:09.501228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3700, cid 4, qid 0 00:22:43.147 [2024-12-09 17:33:09.501376] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.147 [2024-12-09 17:33:09.501382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.147 [2024-12-09 17:33:09.501385] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.501388] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1381690): datao=0, datal=3072, cccid=4 00:22:43.147 [2024-12-09 17:33:09.501391] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13e3700) on tqpair(0x1381690): expected_datao=0, payload_size=3072 00:22:43.147 [2024-12-09 17:33:09.501395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.501410] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.501414] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.543276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.147 [2024-12-09 17:33:09.543285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.147 [2024-12-09 17:33:09.543288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.543291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3700) on tqpair=0x1381690 00:22:43.147 [2024-12-09 17:33:09.543303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.543306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1381690) 00:22:43.147 [2024-12-09 17:33:09.543312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.147 [2024-12-09 17:33:09.543326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3700, cid 4, qid 0 00:22:43.147 [2024-12-09 
17:33:09.543398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.147 [2024-12-09 17:33:09.543403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.147 [2024-12-09 17:33:09.543406] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.147 [2024-12-09 17:33:09.543409] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1381690): datao=0, datal=8, cccid=4 00:22:43.147 [2024-12-09 17:33:09.543413] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13e3700) on tqpair(0x1381690): expected_datao=0, payload_size=8 00:22:43.148 [2024-12-09 17:33:09.543417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.148 [2024-12-09 17:33:09.543422] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.148 [2024-12-09 17:33:09.543425] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.148 [2024-12-09 17:33:09.585299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.148 [2024-12-09 17:33:09.585308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.148 [2024-12-09 17:33:09.585311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.148 [2024-12-09 17:33:09.585314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3700) on tqpair=0x1381690 00:22:43.148 ===================================================== 00:22:43.148 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:43.148 ===================================================== 00:22:43.148 Controller Capabilities/Features 00:22:43.148 ================================ 00:22:43.148 Vendor ID: 0000 00:22:43.148 Subsystem Vendor ID: 0000 00:22:43.148 Serial Number: .................... 00:22:43.148 Model Number: ........................................ 
00:22:43.148 Firmware Version: 25.01 00:22:43.148 Recommended Arb Burst: 0 00:22:43.148 IEEE OUI Identifier: 00 00 00 00:22:43.148 Multi-path I/O 00:22:43.148 May have multiple subsystem ports: No 00:22:43.148 May have multiple controllers: No 00:22:43.148 Associated with SR-IOV VF: No 00:22:43.148 Max Data Transfer Size: 131072 00:22:43.148 Max Number of Namespaces: 0 00:22:43.148 Max Number of I/O Queues: 1024 00:22:43.148 NVMe Specification Version (VS): 1.3 00:22:43.148 NVMe Specification Version (Identify): 1.3 00:22:43.148 Maximum Queue Entries: 128 00:22:43.148 Contiguous Queues Required: Yes 00:22:43.148 Arbitration Mechanisms Supported 00:22:43.148 Weighted Round Robin: Not Supported 00:22:43.148 Vendor Specific: Not Supported 00:22:43.148 Reset Timeout: 15000 ms 00:22:43.148 Doorbell Stride: 4 bytes 00:22:43.148 NVM Subsystem Reset: Not Supported 00:22:43.148 Command Sets Supported 00:22:43.148 NVM Command Set: Supported 00:22:43.148 Boot Partition: Not Supported 00:22:43.148 Memory Page Size Minimum: 4096 bytes 00:22:43.148 Memory Page Size Maximum: 4096 bytes 00:22:43.148 Persistent Memory Region: Not Supported 00:22:43.148 Optional Asynchronous Events Supported 00:22:43.148 Namespace Attribute Notices: Not Supported 00:22:43.148 Firmware Activation Notices: Not Supported 00:22:43.148 ANA Change Notices: Not Supported 00:22:43.148 PLE Aggregate Log Change Notices: Not Supported 00:22:43.148 LBA Status Info Alert Notices: Not Supported 00:22:43.148 EGE Aggregate Log Change Notices: Not Supported 00:22:43.148 Normal NVM Subsystem Shutdown event: Not Supported 00:22:43.148 Zone Descriptor Change Notices: Not Supported 00:22:43.148 Discovery Log Change Notices: Supported 00:22:43.148 Controller Attributes 00:22:43.148 128-bit Host Identifier: Not Supported 00:22:43.148 Non-Operational Permissive Mode: Not Supported 00:22:43.148 NVM Sets: Not Supported 00:22:43.148 Read Recovery Levels: Not Supported 00:22:43.148 Endurance Groups: Not Supported 00:22:43.148 
Predictable Latency Mode: Not Supported 00:22:43.148 Traffic Based Keep ALive: Not Supported 00:22:43.148 Namespace Granularity: Not Supported 00:22:43.148 SQ Associations: Not Supported 00:22:43.148 UUID List: Not Supported 00:22:43.148 Multi-Domain Subsystem: Not Supported 00:22:43.148 Fixed Capacity Management: Not Supported 00:22:43.148 Variable Capacity Management: Not Supported 00:22:43.148 Delete Endurance Group: Not Supported 00:22:43.148 Delete NVM Set: Not Supported 00:22:43.148 Extended LBA Formats Supported: Not Supported 00:22:43.148 Flexible Data Placement Supported: Not Supported 00:22:43.148 00:22:43.148 Controller Memory Buffer Support 00:22:43.148 ================================ 00:22:43.148 Supported: No 00:22:43.148 00:22:43.148 Persistent Memory Region Support 00:22:43.148 ================================ 00:22:43.148 Supported: No 00:22:43.148 00:22:43.148 Admin Command Set Attributes 00:22:43.148 ============================ 00:22:43.148 Security Send/Receive: Not Supported 00:22:43.148 Format NVM: Not Supported 00:22:43.148 Firmware Activate/Download: Not Supported 00:22:43.148 Namespace Management: Not Supported 00:22:43.148 Device Self-Test: Not Supported 00:22:43.148 Directives: Not Supported 00:22:43.148 NVMe-MI: Not Supported 00:22:43.148 Virtualization Management: Not Supported 00:22:43.148 Doorbell Buffer Config: Not Supported 00:22:43.148 Get LBA Status Capability: Not Supported 00:22:43.148 Command & Feature Lockdown Capability: Not Supported 00:22:43.148 Abort Command Limit: 1 00:22:43.148 Async Event Request Limit: 4 00:22:43.148 Number of Firmware Slots: N/A 00:22:43.148 Firmware Slot 1 Read-Only: N/A 00:22:43.148 Firmware Activation Without Reset: N/A 00:22:43.148 Multiple Update Detection Support: N/A 00:22:43.148 Firmware Update Granularity: No Information Provided 00:22:43.148 Per-Namespace SMART Log: No 00:22:43.148 Asymmetric Namespace Access Log Page: Not Supported 00:22:43.148 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:43.148 Command Effects Log Page: Not Supported 00:22:43.148 Get Log Page Extended Data: Supported 00:22:43.148 Telemetry Log Pages: Not Supported 00:22:43.148 Persistent Event Log Pages: Not Supported 00:22:43.148 Supported Log Pages Log Page: May Support 00:22:43.148 Commands Supported & Effects Log Page: Not Supported 00:22:43.148 Feature Identifiers & Effects Log Page:May Support 00:22:43.148 NVMe-MI Commands & Effects Log Page: May Support 00:22:43.148 Data Area 4 for Telemetry Log: Not Supported 00:22:43.148 Error Log Page Entries Supported: 128 00:22:43.148 Keep Alive: Not Supported 00:22:43.148 00:22:43.148 NVM Command Set Attributes 00:22:43.148 ========================== 00:22:43.148 Submission Queue Entry Size 00:22:43.148 Max: 1 00:22:43.148 Min: 1 00:22:43.148 Completion Queue Entry Size 00:22:43.148 Max: 1 00:22:43.148 Min: 1 00:22:43.148 Number of Namespaces: 0 00:22:43.148 Compare Command: Not Supported 00:22:43.148 Write Uncorrectable Command: Not Supported 00:22:43.148 Dataset Management Command: Not Supported 00:22:43.148 Write Zeroes Command: Not Supported 00:22:43.148 Set Features Save Field: Not Supported 00:22:43.148 Reservations: Not Supported 00:22:43.148 Timestamp: Not Supported 00:22:43.148 Copy: Not Supported 00:22:43.148 Volatile Write Cache: Not Present 00:22:43.148 Atomic Write Unit (Normal): 1 00:22:43.148 Atomic Write Unit (PFail): 1 00:22:43.148 Atomic Compare & Write Unit: 1 00:22:43.148 Fused Compare & Write: Supported 00:22:43.148 Scatter-Gather List 00:22:43.148 SGL Command Set: Supported 00:22:43.148 SGL Keyed: Supported 00:22:43.148 SGL Bit Bucket Descriptor: Not Supported 00:22:43.148 SGL Metadata Pointer: Not Supported 00:22:43.148 Oversized SGL: Not Supported 00:22:43.148 SGL Metadata Address: Not Supported 00:22:43.148 SGL Offset: Supported 00:22:43.148 Transport SGL Data Block: Not Supported 00:22:43.148 Replay Protected Memory Block: Not Supported 00:22:43.148 00:22:43.148 
Firmware Slot Information 00:22:43.148 ========================= 00:22:43.148 Active slot: 0 00:22:43.148 00:22:43.148 00:22:43.148 Error Log 00:22:43.148 ========= 00:22:43.148 00:22:43.148 Active Namespaces 00:22:43.148 ================= 00:22:43.148 Discovery Log Page 00:22:43.148 ================== 00:22:43.148 Generation Counter: 2 00:22:43.148 Number of Records: 2 00:22:43.148 Record Format: 0 00:22:43.148 00:22:43.148 Discovery Log Entry 0 00:22:43.148 ---------------------- 00:22:43.148 Transport Type: 3 (TCP) 00:22:43.148 Address Family: 1 (IPv4) 00:22:43.148 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:43.148 Entry Flags: 00:22:43.148 Duplicate Returned Information: 1 00:22:43.148 Explicit Persistent Connection Support for Discovery: 1 00:22:43.148 Transport Requirements: 00:22:43.148 Secure Channel: Not Required 00:22:43.148 Port ID: 0 (0x0000) 00:22:43.148 Controller ID: 65535 (0xffff) 00:22:43.148 Admin Max SQ Size: 128 00:22:43.148 Transport Service Identifier: 4420 00:22:43.148 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:43.148 Transport Address: 10.0.0.2 00:22:43.148 Discovery Log Entry 1 00:22:43.148 ---------------------- 00:22:43.148 Transport Type: 3 (TCP) 00:22:43.148 Address Family: 1 (IPv4) 00:22:43.148 Subsystem Type: 2 (NVM Subsystem) 00:22:43.148 Entry Flags: 00:22:43.149 Duplicate Returned Information: 0 00:22:43.149 Explicit Persistent Connection Support for Discovery: 0 00:22:43.149 Transport Requirements: 00:22:43.149 Secure Channel: Not Required 00:22:43.149 Port ID: 0 (0x0000) 00:22:43.149 Controller ID: 65535 (0xffff) 00:22:43.149 Admin Max SQ Size: 128 00:22:43.149 Transport Service Identifier: 4420 00:22:43.149 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:43.149 Transport Address: 10.0.0.2 [2024-12-09 17:33:09.585397] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:43.149 [2024-12-09 
17:33:09.585407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3100) on tqpair=0x1381690 00:22:43.149 [2024-12-09 17:33:09.585413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.149 [2024-12-09 17:33:09.585418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3280) on tqpair=0x1381690 00:22:43.149 [2024-12-09 17:33:09.585422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.149 [2024-12-09 17:33:09.585426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3400) on tqpair=0x1381690 00:22:43.149 [2024-12-09 17:33:09.585430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.149 [2024-12-09 17:33:09.585434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.149 [2024-12-09 17:33:09.585438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.149 [2024-12-09 17:33:09.585448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.149 [2024-12-09 17:33:09.585462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.149 [2024-12-09 17:33:09.585475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.149 [2024-12-09 17:33:09.585534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.149 [2024-12-09 
17:33:09.585539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.149 [2024-12-09 17:33:09.585542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.149 [2024-12-09 17:33:09.585554] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.149 [2024-12-09 17:33:09.585566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.149 [2024-12-09 17:33:09.585578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.149 [2024-12-09 17:33:09.585650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.149 [2024-12-09 17:33:09.585655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.149 [2024-12-09 17:33:09.585658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.149 [2024-12-09 17:33:09.585666] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:43.149 [2024-12-09 17:33:09.585670] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:43.149 [2024-12-09 17:33:09.585678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.149 
[2024-12-09 17:33:09.585685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.149 [2024-12-09 17:33:09.585690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.149 [2024-12-09 17:33:09.585699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.149 [2024-12-09 17:33:09.585767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.149 [2024-12-09 17:33:09.585772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.149 [2024-12-09 17:33:09.585775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.149 [2024-12-09 17:33:09.585787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.149 [2024-12-09 17:33:09.585799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.149 [2024-12-09 17:33:09.585808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.149 [2024-12-09 17:33:09.585865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.149 [2024-12-09 17:33:09.585871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.149 [2024-12-09 17:33:09.585873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on 
tqpair=0x1381690 00:22:43.149 [2024-12-09 17:33:09.585885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585891] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.149 [2024-12-09 17:33:09.585897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.149 [2024-12-09 17:33:09.585906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.149 [2024-12-09 17:33:09.585964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.149 [2024-12-09 17:33:09.585970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.149 [2024-12-09 17:33:09.585974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.149 [2024-12-09 17:33:09.585985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.585992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.149 [2024-12-09 17:33:09.585997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.149 [2024-12-09 17:33:09.586007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.149 [2024-12-09 17:33:09.586081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.149 [2024-12-09 17:33:09.586087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:22:43.149 [2024-12-09 17:33:09.586090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.586093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.149 [2024-12-09 17:33:09.586101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.586104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.586107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.149 [2024-12-09 17:33:09.586113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.149 [2024-12-09 17:33:09.586121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.149 [2024-12-09 17:33:09.586201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.149 [2024-12-09 17:33:09.586207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.149 [2024-12-09 17:33:09.586210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.586213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.149 [2024-12-09 17:33:09.586221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.586225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.586228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.149 [2024-12-09 17:33:09.586233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.149 [2024-12-09 17:33:09.586243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x13e3580, cid 3, qid 0 00:22:43.149 [2024-12-09 17:33:09.586322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.149 [2024-12-09 17:33:09.586328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.149 [2024-12-09 17:33:09.586330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.586333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.149 [2024-12-09 17:33:09.586343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.586346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.586349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.149 [2024-12-09 17:33:09.586355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.149 [2024-12-09 17:33:09.586365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.149 [2024-12-09 17:33:09.586434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.149 [2024-12-09 17:33:09.586440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.149 [2024-12-09 17:33:09.586443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.586448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.149 [2024-12-09 17:33:09.586455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.586459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.149 [2024-12-09 17:33:09.586462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.149 [2024-12-09 17:33:09.586468] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.149 [2024-12-09 17:33:09.586476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.149 [2024-12-09 17:33:09.586532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.149 [2024-12-09 17:33:09.586538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.150 [2024-12-09 17:33:09.586540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.150 [2024-12-09 17:33:09.586552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.150 [2024-12-09 17:33:09.586564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.150 [2024-12-09 17:33:09.586572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.150 [2024-12-09 17:33:09.586649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.150 [2024-12-09 17:33:09.586655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.150 [2024-12-09 17:33:09.586658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.150 [2024-12-09 17:33:09.586669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586672] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.150 [2024-12-09 17:33:09.586681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.150 [2024-12-09 17:33:09.586690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.150 [2024-12-09 17:33:09.586749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.150 [2024-12-09 17:33:09.586754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.150 [2024-12-09 17:33:09.586757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.150 [2024-12-09 17:33:09.586768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.150 [2024-12-09 17:33:09.586780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.150 [2024-12-09 17:33:09.586790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.150 [2024-12-09 17:33:09.586865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.150 [2024-12-09 17:33:09.586870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.150 [2024-12-09 17:33:09.586873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586877] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.150 [2024-12-09 17:33:09.586886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.150 [2024-12-09 17:33:09.586898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.150 [2024-12-09 17:33:09.586907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.150 [2024-12-09 17:33:09.586964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.150 [2024-12-09 17:33:09.586970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.150 [2024-12-09 17:33:09.586973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.150 [2024-12-09 17:33:09.586984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.586990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.150 [2024-12-09 17:33:09.586996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.150 [2024-12-09 17:33:09.587005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.150 [2024-12-09 17:33:09.587083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.150 [2024-12-09 
17:33:09.587088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.150 [2024-12-09 17:33:09.587091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.587095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.150 [2024-12-09 17:33:09.587102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.587106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.587109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.150 [2024-12-09 17:33:09.587114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.150 [2024-12-09 17:33:09.587123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.150 [2024-12-09 17:33:09.591175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.150 [2024-12-09 17:33:09.591183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.150 [2024-12-09 17:33:09.591185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.591189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.150 [2024-12-09 17:33:09.591198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.591201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.591204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1381690) 00:22:43.150 [2024-12-09 17:33:09.591210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.150 [2024-12-09 
17:33:09.591220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e3580, cid 3, qid 0 00:22:43.150 [2024-12-09 17:33:09.591372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.150 [2024-12-09 17:33:09.591378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.150 [2024-12-09 17:33:09.591381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.591384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e3580) on tqpair=0x1381690 00:22:43.150 [2024-12-09 17:33:09.591391] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:22:43.150 00:22:43.150 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:43.150 [2024-12-09 17:33:09.626727] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:22:43.150 [2024-12-09 17:33:09.626759] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1983827 ] 00:22:43.150 [2024-12-09 17:33:09.665464] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:43.150 [2024-12-09 17:33:09.665505] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:43.150 [2024-12-09 17:33:09.665510] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:43.150 [2024-12-09 17:33:09.665523] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:43.150 [2024-12-09 17:33:09.665531] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:43.150 [2024-12-09 17:33:09.669376] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:43.150 [2024-12-09 17:33:09.669405] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5b1690 0 00:22:43.150 [2024-12-09 17:33:09.676175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:43.150 [2024-12-09 17:33:09.676190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:43.150 [2024-12-09 17:33:09.676194] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:43.150 [2024-12-09 17:33:09.676197] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:43.150 [2024-12-09 17:33:09.676224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.676229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.676232] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b1690) 00:22:43.150 [2024-12-09 17:33:09.676243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:43.150 [2024-12-09 17:33:09.676259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613100, cid 0, qid 0 00:22:43.150 [2024-12-09 17:33:09.684176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.150 [2024-12-09 17:33:09.684186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.150 [2024-12-09 17:33:09.684189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.684193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613100) on tqpair=0x5b1690 00:22:43.150 [2024-12-09 17:33:09.684202] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:43.150 [2024-12-09 17:33:09.684208] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:43.150 [2024-12-09 17:33:09.684212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:43.150 [2024-12-09 17:33:09.684224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.684228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.150 [2024-12-09 17:33:09.684231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b1690) 00:22:43.150 [2024-12-09 17:33:09.684238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.150 [2024-12-09 17:33:09.684254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613100, cid 0, qid 0 00:22:43.413 [2024-12-09 17:33:09.684334] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.413 [2024-12-09 17:33:09.684341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.413 [2024-12-09 17:33:09.684346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.413 [2024-12-09 17:33:09.684350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613100) on tqpair=0x5b1690 00:22:43.413 [2024-12-09 17:33:09.684356] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:43.413 [2024-12-09 17:33:09.684362] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:43.413 [2024-12-09 17:33:09.684369] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.413 [2024-12-09 17:33:09.684372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.413 [2024-12-09 17:33:09.684376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b1690) 00:22:43.413 [2024-12-09 17:33:09.684383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.413 [2024-12-09 17:33:09.684393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613100, cid 0, qid 0 00:22:43.413 [2024-12-09 17:33:09.684454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.413 [2024-12-09 17:33:09.684461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.413 [2024-12-09 17:33:09.684464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.413 [2024-12-09 17:33:09.684467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613100) on tqpair=0x5b1690 00:22:43.413 [2024-12-09 17:33:09.684471] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:22:43.413 [2024-12-09 17:33:09.684478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:43.413 [2024-12-09 17:33:09.684484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.413 [2024-12-09 17:33:09.684488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.413 [2024-12-09 17:33:09.684491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b1690) 00:22:43.413 [2024-12-09 17:33:09.684496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.413 [2024-12-09 17:33:09.684505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613100, cid 0, qid 0 00:22:43.413 [2024-12-09 17:33:09.684564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.413 [2024-12-09 17:33:09.684570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.413 [2024-12-09 17:33:09.684573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.684576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613100) on tqpair=0x5b1690 00:22:43.414 [2024-12-09 17:33:09.684580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:43.414 [2024-12-09 17:33:09.684589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.684592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.684596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b1690) 00:22:43.414 [2024-12-09 17:33:09.684602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.414 [2024-12-09 17:33:09.684611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613100, cid 0, qid 0 00:22:43.414 [2024-12-09 17:33:09.684668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.414 [2024-12-09 17:33:09.684673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.414 [2024-12-09 17:33:09.684679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.684682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613100) on tqpair=0x5b1690 00:22:43.414 [2024-12-09 17:33:09.684686] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:43.414 [2024-12-09 17:33:09.684691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:43.414 [2024-12-09 17:33:09.684697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:43.414 [2024-12-09 17:33:09.684805] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:43.414 [2024-12-09 17:33:09.684809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:43.414 [2024-12-09 17:33:09.684816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.684819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.684822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b1690) 00:22:43.414 [2024-12-09 17:33:09.684828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.414 [2024-12-09 17:33:09.684838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613100, cid 0, qid 0 00:22:43.414 [2024-12-09 17:33:09.684896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.414 [2024-12-09 17:33:09.684902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.414 [2024-12-09 17:33:09.684905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.684908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613100) on tqpair=0x5b1690 00:22:43.414 [2024-12-09 17:33:09.684912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:43.414 [2024-12-09 17:33:09.684920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.684924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.684927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b1690) 00:22:43.414 [2024-12-09 17:33:09.684932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.414 [2024-12-09 17:33:09.684941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613100, cid 0, qid 0 00:22:43.414 [2024-12-09 17:33:09.685015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.414 [2024-12-09 17:33:09.685021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.414 [2024-12-09 17:33:09.685024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.685028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613100) on tqpair=0x5b1690 00:22:43.414 [2024-12-09 17:33:09.685032] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:43.414 [2024-12-09 17:33:09.685037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:43.414 [2024-12-09 17:33:09.685044] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:43.414 [2024-12-09 17:33:09.685054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:43.414 [2024-12-09 17:33:09.685062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.685066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b1690) 00:22:43.414 [2024-12-09 17:33:09.685073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.414 [2024-12-09 17:33:09.685083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613100, cid 0, qid 0 00:22:43.414 [2024-12-09 17:33:09.685194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.414 [2024-12-09 17:33:09.685201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.414 [2024-12-09 17:33:09.685204] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.685208] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5b1690): datao=0, datal=4096, cccid=0 00:22:43.414 [2024-12-09 17:33:09.685212] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x613100) on tqpair(0x5b1690): expected_datao=0, payload_size=4096 00:22:43.414 [2024-12-09 17:33:09.685216] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.685227] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.685232] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.414 [2024-12-09 17:33:09.727314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.414 [2024-12-09 17:33:09.727317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613100) on tqpair=0x5b1690 00:22:43.414 [2024-12-09 17:33:09.727329] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:43.414 [2024-12-09 17:33:09.727341] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:43.414 [2024-12-09 17:33:09.727346] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:43.414 [2024-12-09 17:33:09.727350] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:43.414 [2024-12-09 17:33:09.727354] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:43.414 [2024-12-09 17:33:09.727358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:43.414 [2024-12-09 17:33:09.727367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:43.414 [2024-12-09 17:33:09.727374] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727378] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b1690) 00:22:43.414 [2024-12-09 17:33:09.727388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:43.414 [2024-12-09 17:33:09.727400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613100, cid 0, qid 0 00:22:43.414 [2024-12-09 17:33:09.727459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.414 [2024-12-09 17:33:09.727465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.414 [2024-12-09 17:33:09.727468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613100) on tqpair=0x5b1690 00:22:43.414 [2024-12-09 17:33:09.727477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b1690) 00:22:43.414 [2024-12-09 17:33:09.727488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.414 [2024-12-09 17:33:09.727496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5b1690) 00:22:43.414 [2024-12-09 17:33:09.727508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:43.414 [2024-12-09 17:33:09.727513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5b1690) 00:22:43.414 [2024-12-09 17:33:09.727524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.414 [2024-12-09 17:33:09.727529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b1690) 00:22:43.414 [2024-12-09 17:33:09.727540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.414 [2024-12-09 17:33:09.727544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:43.414 [2024-12-09 17:33:09.727555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:43.414 [2024-12-09 17:33:09.727561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.414 [2024-12-09 17:33:09.727564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5b1690) 00:22:43.414 [2024-12-09 17:33:09.727570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.414 [2024-12-09 17:33:09.727581] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x613100, cid 0, qid 0 00:22:43.414 [2024-12-09 17:33:09.727585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613280, cid 1, qid 0 00:22:43.414 [2024-12-09 17:33:09.727589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613400, cid 2, qid 0 00:22:43.414 [2024-12-09 17:33:09.727593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613580, cid 3, qid 0 00:22:43.414 [2024-12-09 17:33:09.727597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613700, cid 4, qid 0 00:22:43.414 [2024-12-09 17:33:09.727692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.414 [2024-12-09 17:33:09.727698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.414 [2024-12-09 17:33:09.727701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.727704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613700) on tqpair=0x5b1690 00:22:43.415 [2024-12-09 17:33:09.727709] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:43.415 [2024-12-09 17:33:09.727713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.727721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.727727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.727732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.727736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.415 [2024-12-09 
17:33:09.727741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5b1690) 00:22:43.415 [2024-12-09 17:33:09.727746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:43.415 [2024-12-09 17:33:09.727756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613700, cid 4, qid 0 00:22:43.415 [2024-12-09 17:33:09.727815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.415 [2024-12-09 17:33:09.727821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.415 [2024-12-09 17:33:09.727824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.727827] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613700) on tqpair=0x5b1690 00:22:43.415 [2024-12-09 17:33:09.727881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.727890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.727897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.727900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5b1690) 00:22:43.415 [2024-12-09 17:33:09.727906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.415 [2024-12-09 17:33:09.727915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613700, cid 4, qid 0 00:22:43.415 [2024-12-09 17:33:09.727991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.415 [2024-12-09 17:33:09.727997] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.415 [2024-12-09 17:33:09.728000] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.728003] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5b1690): datao=0, datal=4096, cccid=4 00:22:43.415 [2024-12-09 17:33:09.728007] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x613700) on tqpair(0x5b1690): expected_datao=0, payload_size=4096 00:22:43.415 [2024-12-09 17:33:09.728011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.728017] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.728020] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.728044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.415 [2024-12-09 17:33:09.728049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.415 [2024-12-09 17:33:09.728052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.728056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613700) on tqpair=0x5b1690 00:22:43.415 [2024-12-09 17:33:09.728065] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:43.415 [2024-12-09 17:33:09.728076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.728085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.728092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.728096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0x5b1690) 00:22:43.415 [2024-12-09 17:33:09.728101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.415 [2024-12-09 17:33:09.728112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613700, cid 4, qid 0 00:22:43.415 [2024-12-09 17:33:09.732176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.415 [2024-12-09 17:33:09.732184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.415 [2024-12-09 17:33:09.732189] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732192] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5b1690): datao=0, datal=4096, cccid=4 00:22:43.415 [2024-12-09 17:33:09.732197] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x613700) on tqpair(0x5b1690): expected_datao=0, payload_size=4096 00:22:43.415 [2024-12-09 17:33:09.732200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732206] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732209] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.415 [2024-12-09 17:33:09.732219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.415 [2024-12-09 17:33:09.732222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613700) on tqpair=0x5b1690 00:22:43.415 [2024-12-09 17:33:09.732239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:43.415 [2024-12-09 
17:33:09.732248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.732254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5b1690) 00:22:43.415 [2024-12-09 17:33:09.732263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.415 [2024-12-09 17:33:09.732274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613700, cid 4, qid 0 00:22:43.415 [2024-12-09 17:33:09.732420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.415 [2024-12-09 17:33:09.732426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.415 [2024-12-09 17:33:09.732429] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732432] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5b1690): datao=0, datal=4096, cccid=4 00:22:43.415 [2024-12-09 17:33:09.732436] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x613700) on tqpair(0x5b1690): expected_datao=0, payload_size=4096 00:22:43.415 [2024-12-09 17:33:09.732440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732445] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732448] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.415 [2024-12-09 17:33:09.732464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.415 [2024-12-09 17:33:09.732467] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613700) on tqpair=0x5b1690 00:22:43.415 [2024-12-09 17:33:09.732476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.732483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.732491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.732498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.732503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.732510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.732514] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:43.415 [2024-12-09 17:33:09.732518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:43.415 [2024-12-09 17:33:09.732523] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:43.415 [2024-12-09 17:33:09.732536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732539] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5b1690) 00:22:43.415 [2024-12-09 17:33:09.732545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.415 [2024-12-09 17:33:09.732551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5b1690) 00:22:43.415 [2024-12-09 17:33:09.732562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.415 [2024-12-09 17:33:09.732575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613700, cid 4, qid 0 00:22:43.415 [2024-12-09 17:33:09.732580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613880, cid 5, qid 0 00:22:43.415 [2024-12-09 17:33:09.732654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.415 [2024-12-09 17:33:09.732660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.415 [2024-12-09 17:33:09.732663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613700) on tqpair=0x5b1690 00:22:43.415 [2024-12-09 17:33:09.732671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.415 [2024-12-09 17:33:09.732676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.415 [2024-12-09 17:33:09.732679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613880) on tqpair=0x5b1690 00:22:43.415 [2024-12-09 
17:33:09.732690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.415 [2024-12-09 17:33:09.732694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5b1690) 00:22:43.415 [2024-12-09 17:33:09.732699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.415 [2024-12-09 17:33:09.732708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613880, cid 5, qid 0 00:22:43.415 [2024-12-09 17:33:09.732771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.415 [2024-12-09 17:33:09.732776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.415 [2024-12-09 17:33:09.732779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.732782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613880) on tqpair=0x5b1690 00:22:43.416 [2024-12-09 17:33:09.732790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.732793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5b1690) 00:22:43.416 [2024-12-09 17:33:09.732799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-12-09 17:33:09.732808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613880, cid 5, qid 0 00:22:43.416 [2024-12-09 17:33:09.732870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.416 [2024-12-09 17:33:09.732880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.416 [2024-12-09 17:33:09.732883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.732887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x613880) on tqpair=0x5b1690 00:22:43.416 [2024-12-09 17:33:09.732894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.732898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5b1690) 00:22:43.416 [2024-12-09 17:33:09.732903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-12-09 17:33:09.732912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613880, cid 5, qid 0 00:22:43.416 [2024-12-09 17:33:09.732969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.416 [2024-12-09 17:33:09.732974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.416 [2024-12-09 17:33:09.732977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.732981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613880) on tqpair=0x5b1690 00:22:43.416 [2024-12-09 17:33:09.732993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.732997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5b1690) 00:22:43.416 [2024-12-09 17:33:09.733002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-12-09 17:33:09.733008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5b1690) 00:22:43.416 [2024-12-09 17:33:09.733017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 
[2024-12-09 17:33:09.733023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x5b1690) 00:22:43.416 [2024-12-09 17:33:09.733032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-12-09 17:33:09.733038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5b1690) 00:22:43.416 [2024-12-09 17:33:09.733046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-12-09 17:33:09.733056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613880, cid 5, qid 0 00:22:43.416 [2024-12-09 17:33:09.733060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613700, cid 4, qid 0 00:22:43.416 [2024-12-09 17:33:09.733065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613a00, cid 6, qid 0 00:22:43.416 [2024-12-09 17:33:09.733069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613b80, cid 7, qid 0 00:22:43.416 [2024-12-09 17:33:09.733204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.416 [2024-12-09 17:33:09.733210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.416 [2024-12-09 17:33:09.733214] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733217] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5b1690): datao=0, datal=8192, cccid=5 00:22:43.416 [2024-12-09 17:33:09.733221] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x613880) on tqpair(0x5b1690): expected_datao=0, payload_size=8192 00:22:43.416 [2024-12-09 17:33:09.733224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733239] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733243] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.416 [2024-12-09 17:33:09.733253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.416 [2024-12-09 17:33:09.733255] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733258] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5b1690): datao=0, datal=512, cccid=4 00:22:43.416 [2024-12-09 17:33:09.733262] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x613700) on tqpair(0x5b1690): expected_datao=0, payload_size=512 00:22:43.416 [2024-12-09 17:33:09.733266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733271] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733274] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.416 [2024-12-09 17:33:09.733284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.416 [2024-12-09 17:33:09.733287] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733290] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5b1690): datao=0, datal=512, cccid=6 00:22:43.416 [2024-12-09 17:33:09.733293] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x613a00) on tqpair(0x5b1690): expected_datao=0, 
payload_size=512 00:22:43.416 [2024-12-09 17:33:09.733297] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733302] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733306] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.416 [2024-12-09 17:33:09.733315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.416 [2024-12-09 17:33:09.733318] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733321] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5b1690): datao=0, datal=4096, cccid=7 00:22:43.416 [2024-12-09 17:33:09.733325] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x613b80) on tqpair(0x5b1690): expected_datao=0, payload_size=4096 00:22:43.416 [2024-12-09 17:33:09.733328] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733339] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733342] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.416 [2024-12-09 17:33:09.733353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.416 [2024-12-09 17:33:09.733356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613880) on tqpair=0x5b1690 00:22:43.416 [2024-12-09 17:33:09.733371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.416 [2024-12-09 17:33:09.733376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.416 [2024-12-09 
17:33:09.733379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613700) on tqpair=0x5b1690 00:22:43.416 [2024-12-09 17:33:09.733391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.416 [2024-12-09 17:33:09.733396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.416 [2024-12-09 17:33:09.733399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613a00) on tqpair=0x5b1690 00:22:43.416 [2024-12-09 17:33:09.733408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.416 [2024-12-09 17:33:09.733415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.416 [2024-12-09 17:33:09.733418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.416 [2024-12-09 17:33:09.733421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613b80) on tqpair=0x5b1690 00:22:43.416 ===================================================== 00:22:43.416 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:43.416 ===================================================== 00:22:43.416 Controller Capabilities/Features 00:22:43.416 ================================ 00:22:43.416 Vendor ID: 8086 00:22:43.416 Subsystem Vendor ID: 8086 00:22:43.416 Serial Number: SPDK00000000000001 00:22:43.416 Model Number: SPDK bdev Controller 00:22:43.416 Firmware Version: 25.01 00:22:43.416 Recommended Arb Burst: 6 00:22:43.416 IEEE OUI Identifier: e4 d2 5c 00:22:43.416 Multi-path I/O 00:22:43.416 May have multiple subsystem ports: Yes 00:22:43.416 May have multiple controllers: Yes 00:22:43.416 Associated with SR-IOV VF: No 00:22:43.416 Max Data Transfer Size: 131072 00:22:43.416 Max Number of Namespaces: 32 00:22:43.416 
Max Number of I/O Queues: 127 00:22:43.416 NVMe Specification Version (VS): 1.3 00:22:43.416 NVMe Specification Version (Identify): 1.3 00:22:43.416 Maximum Queue Entries: 128 00:22:43.416 Contiguous Queues Required: Yes 00:22:43.416 Arbitration Mechanisms Supported 00:22:43.416 Weighted Round Robin: Not Supported 00:22:43.416 Vendor Specific: Not Supported 00:22:43.416 Reset Timeout: 15000 ms 00:22:43.416 Doorbell Stride: 4 bytes 00:22:43.416 NVM Subsystem Reset: Not Supported 00:22:43.416 Command Sets Supported 00:22:43.416 NVM Command Set: Supported 00:22:43.416 Boot Partition: Not Supported 00:22:43.416 Memory Page Size Minimum: 4096 bytes 00:22:43.416 Memory Page Size Maximum: 4096 bytes 00:22:43.416 Persistent Memory Region: Not Supported 00:22:43.416 Optional Asynchronous Events Supported 00:22:43.416 Namespace Attribute Notices: Supported 00:22:43.416 Firmware Activation Notices: Not Supported 00:22:43.416 ANA Change Notices: Not Supported 00:22:43.416 PLE Aggregate Log Change Notices: Not Supported 00:22:43.416 LBA Status Info Alert Notices: Not Supported 00:22:43.416 EGE Aggregate Log Change Notices: Not Supported 00:22:43.416 Normal NVM Subsystem Shutdown event: Not Supported 00:22:43.416 Zone Descriptor Change Notices: Not Supported 00:22:43.416 Discovery Log Change Notices: Not Supported 00:22:43.417 Controller Attributes 00:22:43.417 128-bit Host Identifier: Supported 00:22:43.417 Non-Operational Permissive Mode: Not Supported 00:22:43.417 NVM Sets: Not Supported 00:22:43.417 Read Recovery Levels: Not Supported 00:22:43.417 Endurance Groups: Not Supported 00:22:43.417 Predictable Latency Mode: Not Supported 00:22:43.417 Traffic Based Keep ALive: Not Supported 00:22:43.417 Namespace Granularity: Not Supported 00:22:43.417 SQ Associations: Not Supported 00:22:43.417 UUID List: Not Supported 00:22:43.417 Multi-Domain Subsystem: Not Supported 00:22:43.417 Fixed Capacity Management: Not Supported 00:22:43.417 Variable Capacity Management: Not Supported 
00:22:43.417 Delete Endurance Group: Not Supported 00:22:43.417 Delete NVM Set: Not Supported 00:22:43.417 Extended LBA Formats Supported: Not Supported 00:22:43.417 Flexible Data Placement Supported: Not Supported 00:22:43.417 00:22:43.417 Controller Memory Buffer Support 00:22:43.417 ================================ 00:22:43.417 Supported: No 00:22:43.417 00:22:43.417 Persistent Memory Region Support 00:22:43.417 ================================ 00:22:43.417 Supported: No 00:22:43.417 00:22:43.417 Admin Command Set Attributes 00:22:43.417 ============================ 00:22:43.417 Security Send/Receive: Not Supported 00:22:43.417 Format NVM: Not Supported 00:22:43.417 Firmware Activate/Download: Not Supported 00:22:43.417 Namespace Management: Not Supported 00:22:43.417 Device Self-Test: Not Supported 00:22:43.417 Directives: Not Supported 00:22:43.417 NVMe-MI: Not Supported 00:22:43.417 Virtualization Management: Not Supported 00:22:43.417 Doorbell Buffer Config: Not Supported 00:22:43.417 Get LBA Status Capability: Not Supported 00:22:43.417 Command & Feature Lockdown Capability: Not Supported 00:22:43.417 Abort Command Limit: 4 00:22:43.417 Async Event Request Limit: 4 00:22:43.417 Number of Firmware Slots: N/A 00:22:43.417 Firmware Slot 1 Read-Only: N/A 00:22:43.417 Firmware Activation Without Reset: N/A 00:22:43.417 Multiple Update Detection Support: N/A 00:22:43.417 Firmware Update Granularity: No Information Provided 00:22:43.417 Per-Namespace SMART Log: No 00:22:43.417 Asymmetric Namespace Access Log Page: Not Supported 00:22:43.417 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:43.417 Command Effects Log Page: Supported 00:22:43.417 Get Log Page Extended Data: Supported 00:22:43.417 Telemetry Log Pages: Not Supported 00:22:43.417 Persistent Event Log Pages: Not Supported 00:22:43.417 Supported Log Pages Log Page: May Support 00:22:43.417 Commands Supported & Effects Log Page: Not Supported 00:22:43.417 Feature Identifiers & Effects Log Page:May Support 
00:22:43.417 NVMe-MI Commands & Effects Log Page: May Support 00:22:43.417 Data Area 4 for Telemetry Log: Not Supported 00:22:43.417 Error Log Page Entries Supported: 128 00:22:43.417 Keep Alive: Supported 00:22:43.417 Keep Alive Granularity: 10000 ms 00:22:43.417 00:22:43.417 NVM Command Set Attributes 00:22:43.417 ========================== 00:22:43.417 Submission Queue Entry Size 00:22:43.417 Max: 64 00:22:43.417 Min: 64 00:22:43.417 Completion Queue Entry Size 00:22:43.417 Max: 16 00:22:43.417 Min: 16 00:22:43.417 Number of Namespaces: 32 00:22:43.417 Compare Command: Supported 00:22:43.417 Write Uncorrectable Command: Not Supported 00:22:43.417 Dataset Management Command: Supported 00:22:43.417 Write Zeroes Command: Supported 00:22:43.417 Set Features Save Field: Not Supported 00:22:43.417 Reservations: Supported 00:22:43.417 Timestamp: Not Supported 00:22:43.417 Copy: Supported 00:22:43.417 Volatile Write Cache: Present 00:22:43.417 Atomic Write Unit (Normal): 1 00:22:43.417 Atomic Write Unit (PFail): 1 00:22:43.417 Atomic Compare & Write Unit: 1 00:22:43.417 Fused Compare & Write: Supported 00:22:43.417 Scatter-Gather List 00:22:43.417 SGL Command Set: Supported 00:22:43.417 SGL Keyed: Supported 00:22:43.417 SGL Bit Bucket Descriptor: Not Supported 00:22:43.417 SGL Metadata Pointer: Not Supported 00:22:43.417 Oversized SGL: Not Supported 00:22:43.417 SGL Metadata Address: Not Supported 00:22:43.417 SGL Offset: Supported 00:22:43.417 Transport SGL Data Block: Not Supported 00:22:43.417 Replay Protected Memory Block: Not Supported 00:22:43.417 00:22:43.417 Firmware Slot Information 00:22:43.417 ========================= 00:22:43.417 Active slot: 1 00:22:43.417 Slot 1 Firmware Revision: 25.01 00:22:43.417 00:22:43.417 00:22:43.417 Commands Supported and Effects 00:22:43.417 ============================== 00:22:43.417 Admin Commands 00:22:43.417 -------------- 00:22:43.417 Get Log Page (02h): Supported 00:22:43.417 Identify (06h): Supported 00:22:43.417 Abort 
(08h): Supported 00:22:43.417 Set Features (09h): Supported 00:22:43.417 Get Features (0Ah): Supported 00:22:43.417 Asynchronous Event Request (0Ch): Supported 00:22:43.417 Keep Alive (18h): Supported 00:22:43.417 I/O Commands 00:22:43.417 ------------ 00:22:43.417 Flush (00h): Supported LBA-Change 00:22:43.417 Write (01h): Supported LBA-Change 00:22:43.417 Read (02h): Supported 00:22:43.417 Compare (05h): Supported 00:22:43.417 Write Zeroes (08h): Supported LBA-Change 00:22:43.417 Dataset Management (09h): Supported LBA-Change 00:22:43.417 Copy (19h): Supported LBA-Change 00:22:43.417 00:22:43.417 Error Log 00:22:43.417 ========= 00:22:43.417 00:22:43.417 Arbitration 00:22:43.417 =========== 00:22:43.417 Arbitration Burst: 1 00:22:43.417 00:22:43.417 Power Management 00:22:43.417 ================ 00:22:43.417 Number of Power States: 1 00:22:43.417 Current Power State: Power State #0 00:22:43.417 Power State #0: 00:22:43.417 Max Power: 0.00 W 00:22:43.417 Non-Operational State: Operational 00:22:43.417 Entry Latency: Not Reported 00:22:43.417 Exit Latency: Not Reported 00:22:43.417 Relative Read Throughput: 0 00:22:43.417 Relative Read Latency: 0 00:22:43.417 Relative Write Throughput: 0 00:22:43.417 Relative Write Latency: 0 00:22:43.417 Idle Power: Not Reported 00:22:43.417 Active Power: Not Reported 00:22:43.417 Non-Operational Permissive Mode: Not Supported 00:22:43.417 00:22:43.417 Health Information 00:22:43.417 ================== 00:22:43.417 Critical Warnings: 00:22:43.417 Available Spare Space: OK 00:22:43.417 Temperature: OK 00:22:43.417 Device Reliability: OK 00:22:43.417 Read Only: No 00:22:43.417 Volatile Memory Backup: OK 00:22:43.417 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:43.417 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:43.417 Available Spare: 0% 00:22:43.417 Available Spare Threshold: 0% 00:22:43.417 Life Percentage Used:[2024-12-09 17:33:09.733505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.417 
[2024-12-09 17:33:09.733509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5b1690) 00:22:43.417 [2024-12-09 17:33:09.733515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-12-09 17:33:09.733526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613b80, cid 7, qid 0 00:22:43.417 [2024-12-09 17:33:09.733594] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.417 [2024-12-09 17:33:09.733599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.417 [2024-12-09 17:33:09.733602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.417 [2024-12-09 17:33:09.733606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613b80) on tqpair=0x5b1690 00:22:43.417 [2024-12-09 17:33:09.733637] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:43.417 [2024-12-09 17:33:09.733646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613100) on tqpair=0x5b1690 00:22:43.417 [2024-12-09 17:33:09.733652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-12-09 17:33:09.733656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613280) on tqpair=0x5b1690 00:22:43.417 [2024-12-09 17:33:09.733660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-12-09 17:33:09.733665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613400) on tqpair=0x5b1690 00:22:43.417 [2024-12-09 17:33:09.733669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 
[2024-12-09 17:33:09.733673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613580) on tqpair=0x5b1690 00:22:43.417 [2024-12-09 17:33:09.733677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-12-09 17:33:09.733684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.417 [2024-12-09 17:33:09.733687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.417 [2024-12-09 17:33:09.733690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b1690) 00:22:43.417 [2024-12-09 17:33:09.733696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-12-09 17:33:09.733707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613580, cid 3, qid 0 00:22:43.417 [2024-12-09 17:33:09.733772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.417 [2024-12-09 17:33:09.733777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.417 [2024-12-09 17:33:09.733780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.418 [2024-12-09 17:33:09.733784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613580) on tqpair=0x5b1690 00:22:43.418 [2024-12-09 17:33:09.733789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.418 [2024-12-09 17:33:09.733792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.418 [2024-12-09 17:33:09.733795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b1690) 00:22:43.418 [2024-12-09 17:33:09.733801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.418 [2024-12-09 17:33:09.733812] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613580, cid 3, qid 0 00:22:43.418 [2024-12-09 17:33:09.733878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.418 [2024-12-09 17:33:09.733883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.418 [2024-12-09 17:33:09.733886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.418 [2024-12-09 17:33:09.733890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613580) on tqpair=0x5b1690 00:22:43.418 [2024-12-09 17:33:09.733894] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:43.418 [2024-12-09 17:33:09.733898] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:43.418 [2024-12-09 17:33:09.733907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.418 [2024-12-09 17:33:09.733910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.418 [2024-12-09 17:33:09.733913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b1690) 00:22:43.418 [2024-12-09 17:33:09.733919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.418 [2024-12-09 17:33:09.733928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x613580, cid 3, qid 0 00:22:43.418 [2024-12-09 17:33:09.733992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.418 [2024-12-09 17:33:09.733997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.418 [2024-12-09 17:33:09.734000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.418 [2024-12-09 17:33:09.734004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x613580) on tqpair=0x5b1690 00:22:43.418 [2024-12-09 17:33:09.734012] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:22:43.420 0% 00:22:43.420 Data Units Read: 0 00:22:43.420 Data Units Written: 0 00:22:43.420 Host Read Commands: 0 00:22:43.420 Host Write Commands: 0 00:22:43.420 Controller Busy Time: 0 minutes 00:22:43.420 Power Cycles: 0 00:22:43.420 Power On Hours: 0 hours 00:22:43.420 Unsafe Shutdowns: 0 00:22:43.420 Unrecoverable Media Errors: 0 00:22:43.420 Lifetime Error Log Entries: 0 00:22:43.420 Warning Temperature Time: 0 minutes 00:22:43.420 Critical Temperature Time: 0 minutes 00:22:43.420 00:22:43.420 Number of Queues 00:22:43.420 ================ 00:22:43.420 Number of I/O Submission Queues: 127 00:22:43.420 Number of I/O Completion Queues: 127 00:22:43.420 00:22:43.420 Active Namespaces 00:22:43.420 ================= 00:22:43.420 Namespace ID:1 00:22:43.420 Error Recovery Timeout: Unlimited 00:22:43.420 Command Set Identifier: NVM (00h) 00:22:43.420 Deallocate: Supported 00:22:43.420 Deallocated/Unwritten Error: Not Supported 00:22:43.420 Deallocated Read Value: Unknown 00:22:43.420 Deallocate in Write Zeroes: Not Supported 00:22:43.420 Deallocated Guard Field: 0xFFFF 00:22:43.420 Flush: Supported 00:22:43.420 Reservation: Supported 00:22:43.420 Namespace Sharing Capabilities: Multiple Controllers 00:22:43.420 Size (in LBAs): 131072 (0GiB) 
00:22:43.420 Capacity (in LBAs): 131072 (0GiB) 00:22:43.420 Utilization (in LBAs): 131072 (0GiB) 00:22:43.420 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:43.420 EUI64: ABCDEF0123456789 00:22:43.420 UUID: 5ffd0e04-12b6-4751-a6ea-feeb02e1c960 00:22:43.420 Thin Provisioning: Not Supported 00:22:43.420 Per-NS Atomic Units: Yes 00:22:43.420 Atomic Boundary Size (Normal): 0 00:22:43.420 Atomic Boundary Size (PFail): 0 00:22:43.420 Atomic Boundary Offset: 0 00:22:43.420 Maximum Single Source Range Length: 65535 00:22:43.420 Maximum Copy Length: 65535 00:22:43.420 Maximum Source Range Count: 1 00:22:43.420 NGUID/EUI64 Never Reused: No 00:22:43.420 Namespace Write Protected: No 00:22:43.420 Number of LBA Formats: 1 00:22:43.420 Current LBA Format: LBA Format #00 00:22:43.420 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:43.420 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:43.420 17:33:09 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:43.420 rmmod nvme_tcp 00:22:43.420 rmmod nvme_fabrics 00:22:43.420 rmmod nvme_keyring 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1983792 ']' 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1983792 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1983792 ']' 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1983792 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1983792 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1983792' 00:22:43.420 killing process with pid 1983792 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1983792 00:22:43.420 17:33:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1983792 00:22:43.679 17:33:10 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:43.679 17:33:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:43.679 17:33:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:43.679 17:33:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:43.679 17:33:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:43.679 17:33:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:43.679 17:33:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:43.679 17:33:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:43.679 17:33:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:43.679 17:33:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.679 17:33:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.679 17:33:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:46.216 00:22:46.216 real 0m9.313s 00:22:46.216 user 0m5.649s 00:22:46.216 sys 0m4.786s 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:46.216 ************************************ 00:22:46.216 END TEST nvmf_identify 00:22:46.216 ************************************ 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:46.216 17:33:12 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.216 ************************************ 00:22:46.216 START TEST nvmf_perf 00:22:46.216 ************************************ 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:46.216 * Looking for test storage... 00:22:46.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.216 
17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.216 17:33:12 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:46.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.216 --rc genhtml_branch_coverage=1 00:22:46.216 --rc genhtml_function_coverage=1 00:22:46.216 --rc genhtml_legend=1 00:22:46.216 --rc geninfo_all_blocks=1 00:22:46.216 --rc geninfo_unexecuted_blocks=1 00:22:46.216 00:22:46.216 ' 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:46.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.216 --rc genhtml_branch_coverage=1 00:22:46.216 --rc genhtml_function_coverage=1 00:22:46.216 --rc genhtml_legend=1 00:22:46.216 --rc geninfo_all_blocks=1 00:22:46.216 --rc geninfo_unexecuted_blocks=1 00:22:46.216 00:22:46.216 ' 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:46.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.216 --rc genhtml_branch_coverage=1 00:22:46.216 --rc genhtml_function_coverage=1 00:22:46.216 --rc genhtml_legend=1 00:22:46.216 --rc geninfo_all_blocks=1 00:22:46.216 --rc geninfo_unexecuted_blocks=1 00:22:46.216 00:22:46.216 ' 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:46.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.216 --rc genhtml_branch_coverage=1 00:22:46.216 --rc genhtml_function_coverage=1 00:22:46.216 --rc genhtml_legend=1 00:22:46.216 --rc geninfo_all_blocks=1 00:22:46.216 --rc geninfo_unexecuted_blocks=1 00:22:46.216 00:22:46.216 ' 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.216 17:33:12 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.216 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.217 17:33:12 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.217 17:33:12 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:46.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:46.217 17:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.490 17:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:51.490 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:51.490 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.490 
17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.490 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.490 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.490 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.490 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.490 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:51.490 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:51.490 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.490 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.490 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:51.491 Found net devices under 0000:af:00.0: cvl_0_0 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:51.491 Found net devices under 0000:af:00.1: cvl_0_1 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.491 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.750 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.750 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.750 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:51.750 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:52.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:52.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms
00:22:52.009
00:22:52.009 --- 10.0.0.2 ping statistics ---
00:22:52.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:52.009 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:52.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:52.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms
00:22:52.009
00:22:52.009 --- 10.0.0.1 ping statistics ---
00:22:52.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:52.009 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1987420
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1987420
00:22:52.009
17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1987420 ']' 00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.009 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.010 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.010 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:52.010 [2024-12-09 17:33:18.438494] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:22:52.010 [2024-12-09 17:33:18.438545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.010 [2024-12-09 17:33:18.515555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:52.269 [2024-12-09 17:33:18.556843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.269 [2024-12-09 17:33:18.556881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.269 [2024-12-09 17:33:18.556889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.269 [2024-12-09 17:33:18.556895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.269 [2024-12-09 17:33:18.556900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:52.269 [2024-12-09 17:33:18.558276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.269 [2024-12-09 17:33:18.558334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.269 [2024-12-09 17:33:18.558443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.269 [2024-12-09 17:33:18.558444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:52.269 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.269 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:52.269 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.269 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.269 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:52.269 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.269 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:52.269 17:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:55.556 17:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:55.556 17:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:55.556 17:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:55.556 17:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:55.814 17:33:22 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:55.814 17:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:55.814 17:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:55.814 17:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:55.814 17:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.814 [2024-12-09 17:33:22.338706] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.073 17:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.073 17:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:56.073 17:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:56.333 17:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:56.333 17:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:56.599 17:33:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.858 [2024-12-09 17:33:23.177827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.858 17:33:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420
00:22:56.858 17:33:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:22:56.858 17:33:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:22:56.858 17:33:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:22:56.858 17:33:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:22:58.234 Initializing NVMe Controllers
00:22:58.234 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:22:58.234 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:22:58.234 Initialization complete. Launching workers.
00:22:58.234 ========================================================
00:22:58.234 Latency(us)
00:22:58.234 Device Information : IOPS MiB/s Average min max
00:22:58.234 PCIE (0000:5e:00.0) NSID 1 from core 0: 99440.01 388.44 321.44 33.31 7247.70
00:22:58.234 ========================================================
00:22:58.234 Total : 99440.01 388.44 321.44 33.31 7247.70
00:22:58.234
00:22:58.234 17:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:59.609 Initializing NVMe Controllers
00:22:59.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:59.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:59.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:59.609 Initialization complete. Launching workers.
00:22:59.609 ========================================================
00:22:59.609 Latency(us)
00:22:59.609 Device Information : IOPS MiB/s Average min max
00:22:59.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 107.00 0.42 9625.48 103.26 45693.88
00:22:59.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18636.87 7944.46 48847.48
00:22:59.610 ========================================================
00:22:59.610 Total : 163.00 0.64 12721.42 103.26 48847.48
00:22:59.610
00:22:59.610 17:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:00.985 Initializing NVMe Controllers
00:23:00.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:00.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:00.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:00.985 Initialization complete. Launching workers.
00:23:00.985 ========================================================
00:23:00.985 Latency(us)
00:23:00.985 Device Information : IOPS MiB/s Average min max
00:23:00.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11316.98 44.21 2833.70 505.01 6939.92
00:23:00.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3915.99 15.30 8205.87 6723.07 15993.68
00:23:00.985 ========================================================
00:23:00.985 Total : 15232.97 59.50 4214.74 505.01 15993.68
00:23:00.985
00:23:00.985 17:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:23:00.985 17:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:23:00.985 17:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:03.516 Initializing NVMe Controllers
00:23:03.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:03.516 Controller IO queue size 128, less than required.
00:23:03.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.516 Controller IO queue size 128, less than required.
00:23:03.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:03.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:03.516 Initialization complete. Launching workers.
00:23:03.516 ========================================================
00:23:03.516 Latency(us)
00:23:03.516 Device Information : IOPS MiB/s Average min max
00:23:03.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1810.99 452.75 71730.84 46481.27 112329.33
00:23:03.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 601.50 150.38 218681.80 72176.19 323344.29
00:23:03.516 ========================================================
00:23:03.516 Total : 2412.50 603.12 108369.76 46481.27 323344.29
00:23:03.516
00:23:03.516 17:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:23:03.774 No valid NVMe controllers or AIO or URING devices found
00:23:03.774 Initializing NVMe Controllers
00:23:03.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:03.774 Controller IO queue size 128, less than required.
00:23:03.774 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.774 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:23:03.774 Controller IO queue size 128, less than required.
00:23:03.774 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.774 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:23:03.774 WARNING: Some requested NVMe devices were skipped
00:23:03.774 17:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:23:06.307 Initializing NVMe Controllers
00:23:06.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:06.307 Controller IO queue size 128, less than required.
00:23:06.307 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:06.307 Controller IO queue size 128, less than required.
00:23:06.307 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:06.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:06.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:06.307 Initialization complete. Launching workers.
00:23:06.307 00:23:06.307 ==================== 00:23:06.307 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:06.307 TCP transport: 00:23:06.307 polls: 11637 00:23:06.307 idle_polls: 8245 00:23:06.307 sock_completions: 3392 00:23:06.307 nvme_completions: 6247 00:23:06.307 submitted_requests: 9360 00:23:06.307 queued_requests: 1 00:23:06.307 00:23:06.307 ==================== 00:23:06.307 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:06.307 TCP transport: 00:23:06.307 polls: 11763 00:23:06.307 idle_polls: 7763 00:23:06.307 sock_completions: 4000 00:23:06.307 nvme_completions: 6723 00:23:06.307 submitted_requests: 9966 00:23:06.307 queued_requests: 1 00:23:06.307 ======================================================== 00:23:06.307 Latency(us) 00:23:06.307 Device Information : IOPS MiB/s Average min max 00:23:06.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1561.28 390.32 84177.04 40182.11 144066.58 00:23:06.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1680.26 420.07 76235.11 46685.84 119129.35 00:23:06.307 ======================================================== 00:23:06.307 Total : 3241.54 810.38 80060.32 40182.11 144066.58 00:23:06.307 00:23:06.307 17:33:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:06.307 17:33:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:06.567 17:33:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:06.567 17:33:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:06.567 17:33:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:06.567 17:33:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:06.567 17:33:32 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:23:06.567 17:33:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:06.567 17:33:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:06.567 17:33:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:06.567 17:33:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:06.567 rmmod nvme_tcp 00:23:06.567 rmmod nvme_fabrics 00:23:06.567 rmmod nvme_keyring 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1987420 ']' 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1987420 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1987420 ']' 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1987420 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1987420 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1987420' 00:23:06.567 killing process with pid 1987420 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 1987420 00:23:06.567 17:33:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1987420 00:23:08.471 17:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:08.471 17:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:08.471 17:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:08.471 17:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:08.471 17:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:08.471 17:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:08.471 17:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:08.471 17:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:08.471 17:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:08.471 17:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.471 17:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.471 17:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:10.379 00:23:10.379 real 0m24.459s 00:23:10.379 user 1m3.771s 00:23:10.379 sys 0m8.315s 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:10.379 ************************************ 00:23:10.379 END TEST nvmf_perf 00:23:10.379 ************************************ 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.379 ************************************ 00:23:10.379 START TEST nvmf_fio_host 00:23:10.379 ************************************ 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:10.379 * Looking for test storage... 00:23:10.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:10.379 17:33:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:10.379 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:10.639 17:33:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:10.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.639 --rc genhtml_branch_coverage=1 00:23:10.639 --rc genhtml_function_coverage=1 00:23:10.639 --rc genhtml_legend=1 00:23:10.639 --rc geninfo_all_blocks=1 00:23:10.639 --rc geninfo_unexecuted_blocks=1 00:23:10.639 00:23:10.639 ' 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:10.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.639 --rc genhtml_branch_coverage=1 00:23:10.639 --rc genhtml_function_coverage=1 00:23:10.639 --rc genhtml_legend=1 00:23:10.639 --rc geninfo_all_blocks=1 00:23:10.639 --rc geninfo_unexecuted_blocks=1 00:23:10.639 00:23:10.639 ' 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:10.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.639 --rc genhtml_branch_coverage=1 00:23:10.639 --rc genhtml_function_coverage=1 00:23:10.639 --rc genhtml_legend=1 00:23:10.639 --rc geninfo_all_blocks=1 00:23:10.639 --rc geninfo_unexecuted_blocks=1 00:23:10.639 00:23:10.639 ' 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:10.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.639 --rc genhtml_branch_coverage=1 00:23:10.639 --rc genhtml_function_coverage=1 00:23:10.639 --rc genhtml_legend=1 00:23:10.639 --rc geninfo_all_blocks=1 00:23:10.639 --rc geninfo_unexecuted_blocks=1 00:23:10.639 00:23:10.639 ' 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.639 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:10.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:10.640 17:33:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:10.640 17:33:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:23:17.209 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:17.209 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.209 17:33:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:17.209 Found net devices under 0000:af:00.0: cvl_0_0 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:17.209 Found net devices under 0000:af:00.1: cvl_0_1 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.209 17:33:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:17.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:23:17.209 00:23:17.209 --- 10.0.0.2 ping statistics --- 00:23:17.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.209 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:17.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:23:17.209 00:23:17.209 --- 10.0.0.1 ping statistics --- 00:23:17.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.209 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:17.209 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1993474 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1993474 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1993474 ']' 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.210 17:33:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.210 [2024-12-09 17:33:42.950699] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:23:17.210 [2024-12-09 17:33:42.950747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.210 [2024-12-09 17:33:43.027085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.210 [2024-12-09 17:33:43.068499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.210 [2024-12-09 17:33:43.068537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:17.210 [2024-12-09 17:33:43.068543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.210 [2024-12-09 17:33:43.068549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.210 [2024-12-09 17:33:43.068554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.210 [2024-12-09 17:33:43.069979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.210 [2024-12-09 17:33:43.070089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.210 [2024-12-09 17:33:43.070209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.210 [2024-12-09 17:33:43.070209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.469 17:33:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.469 17:33:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:17.469 17:33:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:17.469 [2024-12-09 17:33:43.954494] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.469 17:33:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:17.469 17:33:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:17.469 17:33:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.728 17:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:17.728 Malloc1 00:23:17.728 17:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:17.987 17:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:18.246 17:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.505 [2024-12-09 17:33:44.806556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.505 17:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:18.788 17:33:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:18.788 17:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:19.051 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:19.051 fio-3.35 00:23:19.051 Starting 1 thread 00:23:21.570 00:23:21.570 test: (groupid=0, jobs=1): err= 0: pid=1994063: Mon Dec 9 17:33:47 2024 00:23:21.570 read: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(93.6MiB/2005msec) 00:23:21.570 slat (nsec): min=1522, max=240008, avg=1680.14, stdev=2181.41 00:23:21.570 clat (usec): min=3179, max=10678, avg=5902.80, stdev=469.69 00:23:21.570 lat (usec): min=3212, max=10680, avg=5904.48, stdev=469.59 00:23:21.570 clat percentiles (usec): 00:23:21.570 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5538], 00:23:21.570 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:23:21.570 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:23:21.570 | 99.00th=[ 6980], 99.50th=[ 7046], 99.90th=[ 9110], 99.95th=[ 9372], 00:23:21.570 | 99.99th=[10683] 00:23:21.570 bw ( KiB/s): min=46824, max=48440, per=99.93%, avg=47782.00, stdev=692.71, samples=4 00:23:21.570 iops : min=11706, max=12110, avg=11945.50, stdev=173.18, samples=4 00:23:21.570 write: IOPS=11.9k, BW=46.5MiB/s (48.7MB/s)(93.2MiB/2005msec); 0 zone resets 00:23:21.570 slat (nsec): min=1560, max=230689, avg=1747.08, stdev=1656.85 00:23:21.570 clat (usec): min=2437, max=9470, avg=4789.69, stdev=374.98 00:23:21.570 lat (usec): min=2453, max=9471, avg=4791.44, stdev=374.93 00:23:21.570 clat percentiles (usec): 00:23:21.570 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:23:21.570 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 
00:23:21.570 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:23:21.570 | 99.00th=[ 5604], 99.50th=[ 5669], 99.90th=[ 7570], 99.95th=[ 8225], 00:23:21.570 | 99.99th=[ 9372] 00:23:21.570 bw ( KiB/s): min=47400, max=48000, per=100.00%, avg=47604.00, stdev=277.47, samples=4 00:23:21.570 iops : min=11850, max=12000, avg=11901.00, stdev=69.37, samples=4 00:23:21.570 lat (msec) : 4=0.84%, 10=99.15%, 20=0.01% 00:23:21.570 cpu : usr=74.85%, sys=24.25%, ctx=103, majf=0, minf=2 00:23:21.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:21.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:21.570 issued rwts: total=23968,23852,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.570 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:21.570 00:23:21.570 Run status group 0 (all jobs): 00:23:21.570 READ: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=93.6MiB (98.2MB), run=2005-2005msec 00:23:21.570 WRITE: bw=46.5MiB/s (48.7MB/s), 46.5MiB/s-46.5MiB/s (48.7MB/s-48.7MB/s), io=93.2MiB (97.7MB), run=2005-2005msec 00:23:21.570 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:21.570 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:21.570 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:21.570 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:23:21.570 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:21.570 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:21.570 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:21.570 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:21.571 17:33:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:21.571 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:21.571 fio-3.35 00:23:21.571 Starting 1 thread 00:23:24.092 00:23:24.092 test: (groupid=0, jobs=1): err= 0: pid=1994617: Mon Dec 9 17:33:50 2024 00:23:24.092 read: IOPS=10.9k, BW=170MiB/s (178MB/s)(340MiB/2005msec) 00:23:24.092 slat (nsec): min=2436, max=81538, avg=2808.03, stdev=1276.46 00:23:24.092 clat (usec): min=1738, max=49504, avg=6854.18, stdev=3385.95 00:23:24.092 lat (usec): min=1741, max=49507, avg=6856.99, stdev=3386.01 00:23:24.092 clat percentiles (usec): 00:23:24.092 | 1.00th=[ 3621], 5.00th=[ 4178], 10.00th=[ 4621], 20.00th=[ 5211], 00:23:24.092 | 30.00th=[ 5669], 40.00th=[ 6194], 50.00th=[ 6718], 60.00th=[ 7111], 00:23:24.092 | 70.00th=[ 7439], 80.00th=[ 7832], 90.00th=[ 8586], 95.00th=[ 9372], 00:23:24.092 | 99.00th=[11731], 99.50th=[43779], 99.90th=[48497], 99.95th=[49021], 00:23:24.092 | 99.99th=[49546] 00:23:24.092 bw ( KiB/s): min=83648, max=96832, per=50.46%, avg=87648.00, stdev=6158.59, samples=4 00:23:24.092 iops : min= 5228, max= 6052, avg=5478.00, stdev=384.91, samples=4 00:23:24.092 write: IOPS=6459, BW=101MiB/s (106MB/s)(180MiB/1779msec); 0 zone resets 00:23:24.092 slat (usec): min=27, max=384, avg=31.56, stdev= 7.69 00:23:24.092 clat (usec): min=3105, max=14907, avg=8577.36, stdev=1554.13 00:23:24.092 lat (usec): min=3134, max=15019, avg=8608.92, stdev=1555.54 00:23:24.092 clat percentiles (usec): 00:23:24.092 | 1.00th=[ 5669], 5.00th=[ 6456], 10.00th=[ 6783], 
20.00th=[ 7308], 00:23:24.092 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8356], 60.00th=[ 8717], 00:23:24.092 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11469], 00:23:24.092 | 99.00th=[12780], 99.50th=[13173], 99.90th=[14353], 99.95th=[14615], 00:23:24.092 | 99.99th=[14877] 00:23:24.092 bw ( KiB/s): min=86784, max=100800, per=88.38%, avg=91344.00, stdev=6418.19, samples=4 00:23:24.092 iops : min= 5424, max= 6300, avg=5709.00, stdev=401.14, samples=4 00:23:24.092 lat (msec) : 2=0.01%, 4=2.20%, 10=89.45%, 20=7.96%, 50=0.38% 00:23:24.092 cpu : usr=85.38%, sys=13.92%, ctx=45, majf=0, minf=2 00:23:24.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:24.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:24.092 issued rwts: total=21765,11491,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:24.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:24.092 00:23:24.092 Run status group 0 (all jobs): 00:23:24.092 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=340MiB (357MB), run=2005-2005msec 00:23:24.092 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=180MiB (188MB), run=1779-1779msec 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:24.092 rmmod nvme_tcp 00:23:24.092 rmmod nvme_fabrics 00:23:24.092 rmmod nvme_keyring 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1993474 ']' 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1993474 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1993474 ']' 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1993474 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.092 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1993474 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1993474' 00:23:24.351 killing process with pid 1993474 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1993474 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1993474 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.887 17:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:26.887 00:23:26.887 real 0m16.172s 00:23:26.887 user 0m47.978s 00:23:26.887 sys 0m6.411s 00:23:26.887 17:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.887 17:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.887 
************************************ 00:23:26.887 END TEST nvmf_fio_host 00:23:26.887 ************************************ 00:23:26.887 17:33:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:26.887 17:33:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:26.887 17:33:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.887 17:33:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.887 ************************************ 00:23:26.887 START TEST nvmf_failover 00:23:26.887 ************************************ 00:23:26.887 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:26.887 * Looking for test storage... 00:23:26.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 
00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.887 17:33:53 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:26.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.887 --rc genhtml_branch_coverage=1 00:23:26.887 --rc genhtml_function_coverage=1 00:23:26.887 --rc genhtml_legend=1 00:23:26.887 --rc geninfo_all_blocks=1 00:23:26.887 --rc geninfo_unexecuted_blocks=1 00:23:26.887 00:23:26.887 ' 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:26.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.887 --rc genhtml_branch_coverage=1 00:23:26.887 --rc genhtml_function_coverage=1 00:23:26.887 --rc genhtml_legend=1 00:23:26.887 --rc geninfo_all_blocks=1 00:23:26.887 --rc geninfo_unexecuted_blocks=1 00:23:26.887 00:23:26.887 ' 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:26.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.887 --rc genhtml_branch_coverage=1 00:23:26.887 --rc genhtml_function_coverage=1 00:23:26.887 --rc genhtml_legend=1 00:23:26.887 --rc geninfo_all_blocks=1 00:23:26.887 --rc geninfo_unexecuted_blocks=1 00:23:26.887 00:23:26.887 ' 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:26.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.887 --rc genhtml_branch_coverage=1 00:23:26.887 --rc genhtml_function_coverage=1 00:23:26.887 --rc 
genhtml_legend=1 00:23:26.887 --rc geninfo_all_blocks=1 00:23:26.887 --rc geninfo_unexecuted_blocks=1 00:23:26.887 00:23:26.887 ' 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.887 17:33:53 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.887 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:26.888 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.460 17:33:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:33.460 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:33.460 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:33.460 Found net devices under 0000:af:00.0: cvl_0_0 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:33.460 Found net devices under 0000:af:00.1: cvl_0_1 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:33.460 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.460 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.460 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:33.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:23:33.461 00:23:33.461 --- 10.0.0.2 ping statistics --- 00:23:33.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.461 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:33.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:23:33.461 00:23:33.461 --- 10.0.0.1 ping statistics --- 00:23:33.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.461 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1998501 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 1998501 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1998501 ']' 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.461 [2024-12-09 17:33:59.138281] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:23:33.461 [2024-12-09 17:33:59.138321] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.461 [2024-12-09 17:33:59.216813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:33.461 [2024-12-09 17:33:59.256818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.461 [2024-12-09 17:33:59.256852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.461 [2024-12-09 17:33:59.256859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.461 [2024-12-09 17:33:59.256865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:33.461 [2024-12-09 17:33:59.256870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.461 [2024-12-09 17:33:59.258200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.461 [2024-12-09 17:33:59.258306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.461 [2024-12-09 17:33:59.258307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:33.461 [2024-12-09 17:33:59.554512] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:33.461 Malloc0 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:33.461 17:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:33.720 17:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.978 [2024-12-09 17:34:00.380177] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.978 17:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:34.243 [2024-12-09 17:34:00.588734] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:34.243 17:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:34.501 [2024-12-09 17:34:00.789375] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:34.501 17:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:34.501 17:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1998782 00:23:34.501 17:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:34.501 17:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1998782 /var/tmp/bdevperf.sock 00:23:34.501 17:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1998782 ']' 00:23:34.501 17:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.501 17:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.501 17:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.501 17:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.501 17:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:34.759 17:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.759 17:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:34.759 17:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:35.018 NVMe0n1 00:23:35.018 17:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:35.583 00:23:35.583 17:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1998979 00:23:35.583 17:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:35.583 17:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:23:36.518 17:34:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:36.518 [2024-12-09 17:34:03.048790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a53e0 is same with the state(6) to be set
00:23:36.776 17:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:23:40.061 17:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:40.061
00:23:40.061 17:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:40.061 [2024-12-09 17:34:06.591786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a6170 is same with the state(6) to be set
00:23:40.319 17:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:43.711 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:43.711 [2024-12-09 17:34:09.803907] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:43.711 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:44.667 17:34:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:44.667 [2024-12-09 17:34:11.017855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f22b0 is same with the state(6) to be set
00:23:44.667 17:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1998979
00:23:51.237 {
00:23:51.237 "results": [
00:23:51.237 {
00:23:51.237 "job": "NVMe0n1",
00:23:51.237 "core_mask": "0x1",
00:23:51.237 "workload": "verify",
00:23:51.237 "status": "finished",
00:23:51.237 "verify_range": {
00:23:51.237 "start": 0,
00:23:51.237 "length": 16384
00:23:51.237 },
00:23:51.237 "queue_depth": 128,
00:23:51.237 "io_size": 4096,
00:23:51.237 "runtime": 15.044126,
00:23:51.237 "iops": 11126.335953314934,
00:23:51.237 "mibps": 43.46224981763646,
00:23:51.237 "io_failed": 15389,
00:23:51.237 "io_timeout": 0,
00:23:51.237 "avg_latency_us": 10486.528382505161, 
00:23:51.237 "min_latency_us": 417.40190476190475,
00:23:51.237 "max_latency_us": 43690.666666666664
00:23:51.237 }
00:23:51.237 ],
00:23:51.237 "core_count": 1
00:23:51.237 }
00:23:51.237 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1998782
00:23:51.237 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1998782 ']'
00:23:51.237 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1998782
00:23:51.237 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:23:51.237 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:51.237 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1998782
00:23:51.237 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:51.237 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:51.237 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1998782'
killing process with pid 1998782
17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1998782
00:23:51.237 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1998782
00:23:51.237 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:51.237 [2024-12-09 17:34:00.853386] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:23:51.237 [2024-12-09 17:34:00.853436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1998782 ] 00:23:51.237 [2024-12-09 17:34:00.926964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.237 [2024-12-09 17:34:00.966735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.237 Running I/O for 15 seconds... 00:23:51.237 11243.00 IOPS, 43.92 MiB/s [2024-12-09T16:34:17.777Z] [2024-12-09 17:34:03.049553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.237 [2024-12-09 17:34:03.049585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.237 [2024-12-09 17:34:03.049601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.237 [2024-12-09 17:34:03.049609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.237 [2024-12-09 17:34:03.049618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.237 [2024-12-09 17:34:03.049625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.237 [2024-12-09 17:34:03.049634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.237 [2024-12-09 17:34:03.049640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:51.237 [2024-12-09 17:34:03.049648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.237 [2024-12-09 17:34:03.049655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.237 [2024-12-09 17:34:03.049663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.237 [2024-12-09 17:34:03.049669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 
17:34:03.049903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049983] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.049991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.049997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.050011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.238 [2024-12-09 17:34:03.050026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.238 [2024-12-09 17:34:03.050041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.238 [2024-12-09 17:34:03.050055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:51.238 [2024-12-09 17:34:03.050068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.238 [2024-12-09 17:34:03.050083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.238 [2024-12-09 17:34:03.050097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.238 [2024-12-09 17:34:03.050111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.050131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.050146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050154] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.050160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.050181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.050196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.050210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.238 [2024-12-09 17:34:03.050224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.238 [2024-12-09 17:34:03.050232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:51.239 [2024-12-09 17:34:03.050326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 
17:34:03.050568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:105 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:51.239 [2024-12-09 17:34:03.050731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.239 [2024-12-09 17:34:03.050766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.239 [2024-12-09 17:34:03.050774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 
[2024-12-09 17:34:03.050974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.050988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.050995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.051003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.051010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.051018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.240 [2024-12-09 17:34:03.051024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.051045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.240 [2024-12-09 17:34:03.051053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100000 len:8 PRP1 0x0 PRP2 0x0 00:23:51.240 [2024-12-09 17:34:03.051060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.051071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.240 [2024-12-09 
17:34:03.051077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.240 [2024-12-09 17:34:03.051082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100008 len:8 PRP1 0x0 PRP2 0x0 00:23:51.240 [2024-12-09 17:34:03.051089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.051096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.240 [2024-12-09 17:34:03.051102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.240 [2024-12-09 17:34:03.051107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100016 len:8 PRP1 0x0 PRP2 0x0 00:23:51.240 [2024-12-09 17:34:03.051114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.051120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.240 [2024-12-09 17:34:03.051125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.240 [2024-12-09 17:34:03.051131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100024 len:8 PRP1 0x0 PRP2 0x0 00:23:51.240 [2024-12-09 17:34:03.051137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.051143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.240 [2024-12-09 17:34:03.051148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.240 [2024-12-09 17:34:03.051154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100032 
len:8 PRP1 0x0 PRP2 0x0 00:23:51.240 [2024-12-09 17:34:03.051160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.051171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.240 [2024-12-09 17:34:03.051176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.240 [2024-12-09 17:34:03.051182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100040 len:8 PRP1 0x0 PRP2 0x0 00:23:51.240 [2024-12-09 17:34:03.051188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.051194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.240 [2024-12-09 17:34:03.051199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.240 [2024-12-09 17:34:03.051204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100048 len:8 PRP1 0x0 PRP2 0x0 00:23:51.240 [2024-12-09 17:34:03.051210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.051217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.240 [2024-12-09 17:34:03.051229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.240 [2024-12-09 17:34:03.051235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100056 len:8 PRP1 0x0 PRP2 0x0 00:23:51.240 [2024-12-09 17:34:03.051241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.051249] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.240 [2024-12-09 17:34:03.051254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.240 [2024-12-09 17:34:03.051260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100064 len:8 PRP1 0x0 PRP2 0x0 00:23:51.240 [2024-12-09 17:34:03.051266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.051273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.240 [2024-12-09 17:34:03.051277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.240 [2024-12-09 17:34:03.051283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100072 len:8 PRP1 0x0 PRP2 0x0 00:23:51.240 [2024-12-09 17:34:03.051289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.240 [2024-12-09 17:34:03.051295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.240 [2024-12-09 17:34:03.051300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.240 [2024-12-09 17:34:03.051306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100080 len:8 PRP1 0x0 PRP2 0x0 00:23:51.240 [2024-12-09 17:34:03.051312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 
17:34:03.051328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100088 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100096 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100104 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100112 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100120 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100128 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100136 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051486] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100144 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100152 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100160 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100168 len:8 PRP1 0x0 PRP2 0x0 
00:23:51.241 [2024-12-09 17:34:03.051564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99216 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99224 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99232 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051643] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99240 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99248 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99256 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.051705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.051711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.241 [2024-12-09 17:34:03.051716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.241 [2024-12-09 17:34:03.051721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99264 len:8 PRP1 0x0 PRP2 0x0 00:23:51.241 [2024-12-09 17:34:03.062519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.062567] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:51.241 [2024-12-09 17:34:03.062590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.241 [2024-12-09 17:34:03.062597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.062605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.241 [2024-12-09 17:34:03.062611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.062618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.241 [2024-12-09 17:34:03.062624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.062633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.241 [2024-12-09 17:34:03.062639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.241 [2024-12-09 17:34:03.062648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:23:51.241 [2024-12-09 17:34:03.062686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107b570 (9): Bad file descriptor 00:23:51.241 [2024-12-09 17:34:03.065976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:51.241 [2024-12-09 17:34:03.248427] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:51.242 10289.50 IOPS, 40.19 MiB/s [2024-12-09T16:34:17.782Z] 10695.67 IOPS, 41.78 MiB/s [2024-12-09T16:34:17.782Z] 10879.50 IOPS, 42.50 MiB/s [2024-12-09T16:34:17.782Z] [2024-12-09 17:34:06.592715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.592750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.242 [2024-12-09 17:34:06.592772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.242 [2024-12-09 17:34:06.592788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.242 [2024-12-09 17:34:06.592802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.242 [2024-12-09 17:34:06.592816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.242 [2024-12-09 17:34:06.592831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.242 [2024-12-09 17:34:06.592845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.242 [2024-12-09 17:34:06.592860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.592874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 
17:34:06.592889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.592907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.592921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.592936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.592950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.592965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592973] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.592979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.592987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.592994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.593001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.593008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.593015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.593021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.593029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.593036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.593043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.593049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.593057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.593063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.593071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.593080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.593088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.593095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.593103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.593109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.593117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.593123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.593131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 
17:34:06.593137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.593145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.593151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.242 [2024-12-09 17:34:06.593159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.242 [2024-12-09 17:34:06.593172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.243 [2024-12-09 17:34:06.593186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.243 [2024-12-09 17:34:06.593201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.243 [2024-12-09 17:34:06.593216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593223] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.243 [2024-12-09 17:34:06.593230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.243 [2024-12-09 17:34:06.593245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.243 [2024-12-09 17:34:06.593259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.243 [2024-12-09 17:34:06.593275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.243 [2024-12-09 17:34:06.593289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.243 [2024-12-09 17:34:06.593303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593387] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 
nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 
[2024-12-09 17:34:06.593551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.243 [2024-12-09 17:34:06.593703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.243 [2024-12-09 17:34:06.593711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 
lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 
17:34:06.593796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593873] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.593986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.593994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 [2024-12-09 17:34:06.594186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.244 [2024-12-09 17:34:06.594195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.244 
[2024-12-09 17:34:06.594201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-09 17:34:06.594220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-09 17:34:06.594227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82648 len:8 PRP1 0x0 PRP2 0x0
[2024-12-09 17:34:06.594233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-09 17:34:06.594267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-09 17:34:06.594276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:2, cid:1, cid:0 through 17:34:06.594317)
[2024-12-09 17:34:06.594324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107b570 is same with the state(6) to be set
[2024-12-09 17:34:06.594498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-12-09 17:34:06.594505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-09 17:34:06.594511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82656 len:8 PRP1 0x0 PRP2 0x0
[2024-12-09 17:34:06.594518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (same abort-queued-i/o / manual-complete / ABORTED - SQ DELETION sequence repeated, 17:34:06.594526 through 17:34:06.606976, for each queued command on sqid:1, all len:8 PRP1 0x0 PRP2 0x0: WRITE lba:82664-82736, READ lba:81968-82016, READ lba:82024-82080, READ lba:81720, WRITE lba:82088-82136, READ lba:81728-81960, WRITE lba:82144; each LBA range steps by 8)
[2024-12-09 17:34:06.606982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.606988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.606993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.606998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82152 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.607005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.607011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.607016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.607022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82160 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.607029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.607036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.607041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.607046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82168 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.607053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.607059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.607064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.607069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82176 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.607076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.607082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.607087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.607092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82184 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.607098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.607105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.607110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.607116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82192 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.607122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.607128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.607133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.607138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82200 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.607144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.607153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.607158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.607163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82208 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.607174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.607180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.607185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.607190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82216 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.607197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.607203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.607208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.607214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82224 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.607222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.607229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.607234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.607239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82232 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.607245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.607251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.607256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.607262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82240 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.607269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.615083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.615094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.615103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82248 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.615113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.615122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.615129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.615137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82256 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.615145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.615154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.615161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.615172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82264 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.615184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.615193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.615199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.615207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82272 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.615215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.615225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.615231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.615239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82280 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.615247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.615256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.615263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.615270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82288 len:8 PRP1 0x0 PRP2 0x0 00:23:51.248 [2024-12-09 17:34:06.615280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.248 [2024-12-09 17:34:06.615290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.248 [2024-12-09 17:34:06.615296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.248 [2024-12-09 17:34:06.615304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82296 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82304 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 
[2024-12-09 17:34:06.615359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82312 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82320 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82328 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:82336 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82344 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82352 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82360 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615574] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82368 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82376 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82384 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 
17:34:06.615684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82392 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82400 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82408 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82416 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82424 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82432 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82440 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615894] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82448 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82456 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.249 [2024-12-09 17:34:06.615966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82464 len:8 PRP1 0x0 PRP2 0x0 00:23:51.249 [2024-12-09 17:34:06.615974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.249 [2024-12-09 17:34:06.615983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.249 [2024-12-09 17:34:06.615990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.615997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82472 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 
[2024-12-09 17:34:06.616005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82480 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82488 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82496 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82504 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82512 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82520 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82528 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82536 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82544 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82552 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82560 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82568 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82576 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82584 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82592 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82600 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82608 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82616 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82624 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 [2024-12-09 17:34:06.616626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82632 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.250 [2024-12-09 17:34:06.616653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.250 
[2024-12-09 17:34:06.616660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.250 [2024-12-09 17:34:06.616667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82640 len:8 PRP1 0x0 PRP2 0x0 00:23:51.250 [2024-12-09 17:34:06.616675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:06.616684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.251 [2024-12-09 17:34:06.616691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.251 [2024-12-09 17:34:06.616698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82648 len:8 PRP1 0x0 PRP2 0x0 00:23:51.251 [2024-12-09 17:34:06.616706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:06.616755] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:51.251 [2024-12-09 17:34:06.616767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:51.251 [2024-12-09 17:34:06.616805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107b570 (9): Bad file descriptor 00:23:51.251 [2024-12-09 17:34:06.621052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:51.251 [2024-12-09 17:34:06.690780] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:23:51.251 10733.20 IOPS, 41.93 MiB/s [2024-12-09T16:34:17.791Z] 10844.50 IOPS, 42.36 MiB/s [2024-12-09T16:34:17.791Z] 10923.86 IOPS, 42.67 MiB/s [2024-12-09T16:34:17.791Z] 11007.00 IOPS, 43.00 MiB/s [2024-12-09T16:34:17.791Z] 11043.67 IOPS, 43.14 MiB/s [2024-12-09T16:34:17.791Z] [2024-12-09 17:34:11.018119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 17:34:11.018240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 17:34:11.018258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 17:34:11.018273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 17:34:11.018288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 17:34:11.018302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 
17:34:11.018316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 17:34:11.018331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 17:34:11.018347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 17:34:11.018361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 17:34:11.018375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 17:34:11.018389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018397] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 17:34:11.018404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 17:34:11.018419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.251 [2024-12-09 17:34:11.018433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 
17:34:11.018565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.251 [2024-12-09 17:34:11.018573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.251 [2024-12-09 17:34:11.018579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018645] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 
17:34:11.018809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018888] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.252 [2024-12-09 17:34:11.018964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.252 [2024-12-09 17:34:11.018978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.018988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.252 [2024-12-09 17:34:11.018994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.019002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.252 [2024-12-09 17:34:11.019009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.019017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.252 [2024-12-09 17:34:11.019024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.019031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.252 [2024-12-09 17:34:11.019038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.019046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.252 [2024-12-09 
17:34:11.019052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.019060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.252 [2024-12-09 17:34:11.019066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.019074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.252 [2024-12-09 17:34:11.019080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.019088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.252 [2024-12-09 17:34:11.019095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.252 [2024-12-09 17:34:11.019102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.252 [2024-12-09 17:34:11.019108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019130] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.253 [2024-12-09 17:34:11.019197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 
17:34:11.019302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019384] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 
17:34:11.019548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019629] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.253 [2024-12-09 17:34:11.019649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.253 [2024-12-09 17:34:11.019657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 
17:34:11.019793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019874] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.254 [2024-12-09 17:34:11.019896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.254 [2024-12-09 17:34:11.019910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.254 [2024-12-09 17:34:11.019924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.254 [2024-12-09 17:34:11.019938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.254 [2024-12-09 17:34:11.019951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.254 [2024-12-09 17:34:11.019966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.254 [2024-12-09 17:34:11.019980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.019988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.254 [2024-12-09 17:34:11.019994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.020013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.254 [2024-12-09 17:34:11.020019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.254 [2024-12-09 17:34:11.020025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110128 len:8 PRP1 0x0 PRP2 0x0 00:23:51.254 [2024-12-09 17:34:11.020034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.020080] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:51.254 [2024-12-09 17:34:11.020102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:51.254 [2024-12-09 17:34:11.020109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.020116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.254 [2024-12-09 17:34:11.020123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.020130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.254 [2024-12-09 17:34:11.020136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.020145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.254 [2024-12-09 17:34:11.020152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.254 [2024-12-09 17:34:11.020159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:51.254 [2024-12-09 17:34:11.022951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:51.254 [2024-12-09 17:34:11.022983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107b570 (9): Bad file descriptor 00:23:51.254 [2024-12-09 17:34:11.089291] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:23:51.254 11002.60 IOPS, 42.98 MiB/s [2024-12-09T16:34:17.794Z] 11032.00 IOPS, 43.09 MiB/s [2024-12-09T16:34:17.794Z] 11071.17 IOPS, 43.25 MiB/s [2024-12-09T16:34:17.794Z] 11101.92 IOPS, 43.37 MiB/s [2024-12-09T16:34:17.794Z] 11137.64 IOPS, 43.51 MiB/s [2024-12-09T16:34:17.794Z] 11158.93 IOPS, 43.59 MiB/s 00:23:51.254 Latency(us) 00:23:51.254 [2024-12-09T16:34:17.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.254 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:51.254 Verification LBA range: start 0x0 length 0x4000 00:23:51.254 NVMe0n1 : 15.04 11126.34 43.46 1022.92 0.00 10486.53 417.40 43690.67 00:23:51.254 [2024-12-09T16:34:17.794Z] =================================================================================================================== 00:23:51.254 [2024-12-09T16:34:17.794Z] Total : 11126.34 43.46 1022.92 0.00 10486.53 417.40 43690.67 00:23:51.254 Received shutdown signal, test time was about 15.000000 seconds 00:23:51.254 00:23:51.254 Latency(us) 00:23:51.254 [2024-12-09T16:34:17.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.254 [2024-12-09T16:34:17.795Z] =================================================================================================================== 00:23:51.255 [2024-12-09T16:34:17.795Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2001461 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2001461 /var/tmp/bdevperf.sock 00:23:51.255 17:34:17 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2001461 ']' 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:51.255 [2024-12-09 17:34:17.701520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:51.255 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:51.513 [2024-12-09 17:34:17.898097] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 
00:23:51.513 17:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:51.771 NVMe0n1 00:23:51.771 17:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:52.338 00:23:52.338 17:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:52.596 00:23:52.596 17:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:52.596 17:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:52.596 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:52.854 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:56.139 17:34:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:56.139 17:34:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:56.139 17:34:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2002190 00:23:56.139 17:34:22 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:56.140 17:34:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2002190 00:23:57.075 { 00:23:57.075 "results": [ 00:23:57.075 { 00:23:57.075 "job": "NVMe0n1", 00:23:57.075 "core_mask": "0x1", 00:23:57.075 "workload": "verify", 00:23:57.075 "status": "finished", 00:23:57.075 "verify_range": { 00:23:57.075 "start": 0, 00:23:57.075 "length": 16384 00:23:57.075 }, 00:23:57.075 "queue_depth": 128, 00:23:57.075 "io_size": 4096, 00:23:57.075 "runtime": 1.009125, 00:23:57.075 "iops": 11383.128948346339, 00:23:57.075 "mibps": 44.465347454477886, 00:23:57.075 "io_failed": 0, 00:23:57.075 "io_timeout": 0, 00:23:57.075 "avg_latency_us": 11188.256762634366, 00:23:57.075 "min_latency_us": 2371.7790476190476, 00:23:57.075 "max_latency_us": 9611.946666666667 00:23:57.075 } 00:23:57.075 ], 00:23:57.075 "core_count": 1 00:23:57.075 } 00:23:57.334 17:34:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:57.334 [2024-12-09 17:34:17.325602] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:23:57.334 [2024-12-09 17:34:17.325655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001461 ] 00:23:57.334 [2024-12-09 17:34:17.398205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.334 [2024-12-09 17:34:17.434705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.334 [2024-12-09 17:34:19.265813] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:57.334 [2024-12-09 17:34:19.265856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.334 [2024-12-09 17:34:19.265868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.334 [2024-12-09 17:34:19.265876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.334 [2024-12-09 17:34:19.265883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.334 [2024-12-09 17:34:19.265890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.334 [2024-12-09 17:34:19.265896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.334 [2024-12-09 17:34:19.265903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.334 [2024-12-09 17:34:19.265910] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.334 [2024-12-09 17:34:19.265916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:57.334 [2024-12-09 17:34:19.265942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:57.334 [2024-12-09 17:34:19.265956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86a570 (9): Bad file descriptor 00:23:57.334 [2024-12-09 17:34:19.314396] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:57.334 Running I/O for 1 seconds... 00:23:57.334 11307.00 IOPS, 44.17 MiB/s 00:23:57.334 Latency(us) 00:23:57.334 [2024-12-09T16:34:23.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.334 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:57.334 Verification LBA range: start 0x0 length 0x4000 00:23:57.334 NVMe0n1 : 1.01 11383.13 44.47 0.00 0.00 11188.26 2371.78 9611.95 00:23:57.334 [2024-12-09T16:34:23.874Z] =================================================================================================================== 00:23:57.334 [2024-12-09T16:34:23.874Z] Total : 11383.13 44.47 0.00 0.00 11188.26 2371.78 9611.95 00:23:57.334 17:34:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:57.334 17:34:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:57.334 17:34:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:57.593 17:34:24 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:57.593 17:34:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:57.851 17:34:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:58.109 17:34:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2001461 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2001461 ']' 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2001461 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2001461 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2001461' 00:24:01.406 killing 
process with pid 2001461 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2001461 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2001461 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:01.406 17:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:01.665 rmmod nvme_tcp 00:24:01.665 rmmod nvme_fabrics 00:24:01.665 rmmod nvme_keyring 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1998501 ']' 00:24:01.665 17:34:28 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1998501 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1998501 ']' 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1998501 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1998501 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1998501' 00:24:01.665 killing process with pid 1998501 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1998501 00:24:01.665 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1998501 00:24:01.924 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:01.924 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:01.924 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:01.924 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:01.924 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:01.924 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:01.924 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:01.924 17:34:28 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:01.924 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:01.924 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.924 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.924 17:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:04.463 00:24:04.463 real 0m37.459s 00:24:04.463 user 1m58.788s 00:24:04.463 sys 0m7.884s 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:04.463 ************************************ 00:24:04.463 END TEST nvmf_failover 00:24:04.463 ************************************ 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.463 ************************************ 00:24:04.463 START TEST nvmf_host_discovery 00:24:04.463 ************************************ 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:04.463 * Looking for test storage... 
00:24:04.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.463 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:04.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.464 --rc genhtml_branch_coverage=1 00:24:04.464 --rc genhtml_function_coverage=1 00:24:04.464 --rc 
genhtml_legend=1 00:24:04.464 --rc geninfo_all_blocks=1 00:24:04.464 --rc geninfo_unexecuted_blocks=1 00:24:04.464 00:24:04.464 ' 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:04.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.464 --rc genhtml_branch_coverage=1 00:24:04.464 --rc genhtml_function_coverage=1 00:24:04.464 --rc genhtml_legend=1 00:24:04.464 --rc geninfo_all_blocks=1 00:24:04.464 --rc geninfo_unexecuted_blocks=1 00:24:04.464 00:24:04.464 ' 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:04.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.464 --rc genhtml_branch_coverage=1 00:24:04.464 --rc genhtml_function_coverage=1 00:24:04.464 --rc genhtml_legend=1 00:24:04.464 --rc geninfo_all_blocks=1 00:24:04.464 --rc geninfo_unexecuted_blocks=1 00:24:04.464 00:24:04.464 ' 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:04.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.464 --rc genhtml_branch_coverage=1 00:24:04.464 --rc genhtml_function_coverage=1 00:24:04.464 --rc genhtml_legend=1 00:24:04.464 --rc geninfo_all_blocks=1 00:24:04.464 --rc geninfo_unexecuted_blocks=1 00:24:04.464 00:24:04.464 ' 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.464 17:34:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.464 17:34:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.464 17:34:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:04.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:04.464 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:09.742 
17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:09.742 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.743 17:34:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:09.743 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:09.743 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:09.743 Found net devices under 0000:af:00.0: cvl_0_0 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:09.743 Found net devices under 0000:af:00.1: cvl_0_1 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:09.743 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.002 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.003 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:10.003 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:10.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:24:10.003 00:24:10.003 --- 10.0.0.2 ping statistics --- 00:24:10.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.003 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:24:10.003 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:24:10.003 00:24:10.003 --- 10.0.0.1 ping statistics --- 00:24:10.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.003 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.262 
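The `nvmf/common.sh` steps above carve the two interfaces into a target network namespace and an initiator side, open TCP/4420, and verify reachability in both directions. A condensed dry-run sketch of that plumbing follows; interface names and addresses are taken from this run, and the `run()` wrapper only echoes each command, since the real commands need root:

```shell
# Condensed sketch of the netns split performed by nvmf/common.sh above.
# run() only echoes; replace the echo with "$@" (and run as root) to apply
# the commands for real. Names/addresses match this particular run.
run() { echo "$@"; }

run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target NIC moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side keeps 10.0.0.1
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                        # initiator -> target check
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator check
```

With the target app later started via `ip netns exec cvl_0_0_ns_spdk`, its listeners on 10.0.0.2 are only reachable through this veth-style pairing, which is why both ping checks must pass before the test proceeds.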
17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2006614 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2006614 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2006614 ']' 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.262 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.262 [2024-12-09 17:34:36.639040] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:24:10.262 [2024-12-09 17:34:36.639084] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.262 [2024-12-09 17:34:36.717705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.262 [2024-12-09 17:34:36.758940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.262 [2024-12-09 17:34:36.758972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.262 [2024-12-09 17:34:36.758980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.262 [2024-12-09 17:34:36.758985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.262 [2024-12-09 17:34:36.758991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:10.262 [2024-12-09 17:34:36.759492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.521 [2024-12-09 17:34:36.895775] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.521 [2024-12-09 17:34:36.907949] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:10.521 17:34:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.521 null0 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.521 null1 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2006747 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2006747 /tmp/host.sock 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2006747 ']' 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:10.521 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.521 17:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.521 [2024-12-09 17:34:36.982266] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:24:10.521 [2024-12-09 17:34:36.982307] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006747 ] 00:24:10.521 [2024-12-09 17:34:37.052908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.780 [2024-12-09 17:34:37.096158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:10.780 
17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:10.780 17:34:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:11.039 
17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.039 [2024-12-09 17:34:37.521520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:11.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
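The `waitforcondition` helper being exercised above evaluates an arbitrary condition string up to `max` times with a one-second back-off, returning success as soon as the condition holds. A minimal re-sketch, assuming the same shape as the `autotest_common.sh` version seen in the trace (local `cond`, `max=10`, `eval`, `sleep 1`):

```shell
# Minimal sketch of the waitforcondition polling helper from the trace.
# Succeeds (return 0) as soon as the condition evaluates true; gives up
# (return 1) after max one-second retries.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}
```

This is why the log shows repeated `get_subsystem_names`/`get_bdev_list` invocations: each retry re-runs the quoted condition, e.g. `waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'`, until discovery attaches the controller or the retries run out.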
00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:11.298 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:11.865 [2024-12-09 17:34:38.263313] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:11.865 [2024-12-09 17:34:38.263335] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:11.865 [2024-12-09 17:34:38.263348] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:11.865 [2024-12-09 17:34:38.350599] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:12.123 [2024-12-09 17:34:38.528451] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:12.123 [2024-12-09 17:34:38.529232] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1c3fee0:1 started. 00:24:12.123 [2024-12-09 17:34:38.530587] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:12.123 [2024-12-09 17:34:38.530603] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:12.123 [2024-12-09 17:34:38.532854] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c3fee0 was disconnected and freed. delete nvme_qpair. 00:24:12.382 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.382 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:12.382 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:12.382 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:12.382 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:12.382 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.382 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:12.382 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.382 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:12.382 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.382 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.383 17:34:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.383 [2024-12-09 17:34:38.920809] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c400c0:1 started. 
00:24:12.383 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.642 [2024-12-09 17:34:38.923454] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c400c0 was disconnected and freed. delete nvme_qpair. 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == 
\n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.642 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.643 17:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.643 [2024-12-09 17:34:39.025635] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:12.643 [2024-12-09 17:34:39.026570] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:12.643 [2024-12-09 17:34:39.026589] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.643 17:34:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.643 17:34:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.643 [2024-12-09 17:34:39.152951] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:12.643 17:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:12.902 [2024-12-09 17:34:39.218552] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:12.902 [2024-12-09 17:34:39.218586] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:12.902 [2024-12-09 17:34:39.218593] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:24:12.902 [2024-12-09 17:34:39.218598] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.840 [2024-12-09 17:34:40.253531] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:13.840 [2024-12-09 17:34:40.253560] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.840 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:13.841 [2024-12-09 17:34:40.262833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.841 [2024-12-09 17:34:40.262854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.841 [2024-12-09 17:34:40.262863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.841 [2024-12-09 17:34:40.262870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.841 [2024-12-09 17:34:40.262893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.841 [2024-12-09 17:34:40.262900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.841 [2024-12-09 17:34:40.262907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.841 [2024-12-09 17:34:40.262914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.841 [2024-12-09 17:34:40.262921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c103d0 is same with the state(6) to be set 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:13.841 [2024-12-09 17:34:40.272845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c103d0 (9): Bad file descriptor 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.841 [2024-12-09 17:34:40.282880] 
bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:13.841 [2024-12-09 17:34:40.282891] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:13.841 [2024-12-09 17:34:40.282897] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:13.841 [2024-12-09 17:34:40.282902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:13.841 [2024-12-09 17:34:40.282920] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:13.841 [2024-12-09 17:34:40.283214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.841 [2024-12-09 17:34:40.283229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c103d0 with addr=10.0.0.2, port=4420 00:24:13.841 [2024-12-09 17:34:40.283237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c103d0 is same with the state(6) to be set 00:24:13.841 [2024-12-09 17:34:40.283254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c103d0 (9): Bad file descriptor 00:24:13.841 [2024-12-09 17:34:40.283265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:13.841 [2024-12-09 17:34:40.283271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:13.841 [2024-12-09 17:34:40.283281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:13.841 [2024-12-09 17:34:40.283287] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:13.841 [2024-12-09 17:34:40.283292] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:13.841 [2024-12-09 17:34:40.283297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:13.841 [2024-12-09 17:34:40.292951] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:13.841 [2024-12-09 17:34:40.292961] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:13.841 [2024-12-09 17:34:40.292966] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:13.841 [2024-12-09 17:34:40.292970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:13.841 [2024-12-09 17:34:40.292982] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:13.841 [2024-12-09 17:34:40.293245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.841 [2024-12-09 17:34:40.293257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c103d0 with addr=10.0.0.2, port=4420 00:24:13.841 [2024-12-09 17:34:40.293265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c103d0 is same with the state(6) to be set 00:24:13.841 [2024-12-09 17:34:40.293275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c103d0 (9): Bad file descriptor 00:24:13.841 [2024-12-09 17:34:40.293285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:13.841 [2024-12-09 17:34:40.293291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:13.841 [2024-12-09 17:34:40.293298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:13.841 [2024-12-09 17:34:40.293303] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:13.841 [2024-12-09 17:34:40.293307] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:13.841 [2024-12-09 17:34:40.293311] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:13.841 [2024-12-09 17:34:40.303014] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:13.841 [2024-12-09 17:34:40.303033] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:13.841 [2024-12-09 17:34:40.303037] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:13.841 [2024-12-09 17:34:40.303041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:13.841 [2024-12-09 17:34:40.303055] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:13.841 [2024-12-09 17:34:40.303246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.841 [2024-12-09 17:34:40.303259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c103d0 with addr=10.0.0.2, port=4420 00:24:13.841 [2024-12-09 17:34:40.303273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c103d0 is same with the state(6) to be set 00:24:13.841 [2024-12-09 17:34:40.303283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c103d0 (9): Bad file descriptor 00:24:13.841 [2024-12-09 17:34:40.303293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:13.841 [2024-12-09 17:34:40.303298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:13.841 [2024-12-09 17:34:40.303305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:13.841 [2024-12-09 17:34:40.303310] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:13.841 [2024-12-09 17:34:40.303315] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:13.841 [2024-12-09 17:34:40.303319] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:13.841 [2024-12-09 17:34:40.313085] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:13.841 [2024-12-09 17:34:40.313098] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:13.841 [2024-12-09 17:34:40.313102] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:13.841 [2024-12-09 17:34:40.313106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:13.841 [2024-12-09 17:34:40.313118] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:13.841 [2024-12-09 17:34:40.313294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.841 [2024-12-09 17:34:40.313305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c103d0 with addr=10.0.0.2, port=4420 00:24:13.841 [2024-12-09 17:34:40.313312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c103d0 is same with the state(6) to be set 00:24:13.841 [2024-12-09 17:34:40.313322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c103d0 (9): Bad file descriptor 00:24:13.841 [2024-12-09 17:34:40.313332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:13.841 [2024-12-09 17:34:40.313338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:13.841 [2024-12-09 17:34:40.313345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:13.841 [2024-12-09 17:34:40.313350] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:13.841 [2024-12-09 17:34:40.313355] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:13.841 [2024-12-09 17:34:40.313358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.841 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:13.841 [2024-12-09 17:34:40.323147] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:13.841 [2024-12-09 17:34:40.323163] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:13.842 [2024-12-09 17:34:40.323171] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:13.842 [2024-12-09 17:34:40.323175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:13.842 [2024-12-09 17:34:40.323190] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:13.842 [2024-12-09 17:34:40.323418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.842 [2024-12-09 17:34:40.323430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c103d0 with addr=10.0.0.2, port=4420 00:24:13.842 [2024-12-09 17:34:40.323438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c103d0 is same with the state(6) to be set 00:24:13.842 [2024-12-09 17:34:40.323448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c103d0 (9): Bad file descriptor 00:24:13.842 [2024-12-09 17:34:40.323457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:13.842 [2024-12-09 17:34:40.323463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:13.842 [2024-12-09 17:34:40.323469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:13.842 [2024-12-09 17:34:40.323475] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:13.842 [2024-12-09 17:34:40.323479] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:13.842 [2024-12-09 17:34:40.323483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:13.842 [2024-12-09 17:34:40.333220] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:13.842 [2024-12-09 17:34:40.333230] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:13.842 [2024-12-09 17:34:40.333234] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:13.842 [2024-12-09 17:34:40.333238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:13.842 [2024-12-09 17:34:40.333250] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:13.842 [2024-12-09 17:34:40.333462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.842 [2024-12-09 17:34:40.333473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c103d0 with addr=10.0.0.2, port=4420 00:24:13.842 [2024-12-09 17:34:40.333480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c103d0 is same with the state(6) to be set 00:24:13.842 [2024-12-09 17:34:40.333490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c103d0 (9): Bad file descriptor 00:24:13.842 [2024-12-09 17:34:40.333499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:13.842 [2024-12-09 17:34:40.333509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:13.842 [2024-12-09 17:34:40.333515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:13.842 [2024-12-09 17:34:40.333520] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:13.842 [2024-12-09 17:34:40.333525] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:13.842 [2024-12-09 17:34:40.333529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:13.842 [2024-12-09 17:34:40.340189] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:13.842 [2024-12-09 17:34:40.340204] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.842 17:34:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:13.842 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:14.101 17:34:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:14.101 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
get_bdev_list 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:14.102 17:34:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.102 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.480 [2024-12-09 17:34:41.672300] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:15.481 [2024-12-09 17:34:41.672317] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:15.481 [2024-12-09 17:34:41.672329] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page 
command 00:24:15.481 [2024-12-09 17:34:41.760582] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:15.481 [2024-12-09 17:34:41.825155] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:15.481 [2024-12-09 17:34:41.825664] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1c277a0:1 started. 00:24:15.481 [2024-12-09 17:34:41.827274] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:15.481 [2024-12-09 17:34:41.827297] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:15.481 [2024-12-09 17:34:41.831293] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1c277a0 was disconnected and freed. delete nvme_qpair. 
00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.481 request: 00:24:15.481 { 00:24:15.481 "name": "nvme", 00:24:15.481 "trtype": "tcp", 00:24:15.481 "traddr": "10.0.0.2", 00:24:15.481 "adrfam": "ipv4", 00:24:15.481 "trsvcid": "8009", 00:24:15.481 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:15.481 "wait_for_attach": true, 00:24:15.481 "method": "bdev_nvme_start_discovery", 00:24:15.481 "req_id": 1 00:24:15.481 } 00:24:15.481 Got JSON-RPC error response 00:24:15.481 response: 00:24:15.481 { 00:24:15.481 "code": -17, 00:24:15.481 "message": "File exists" 00:24:15.481 } 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.481 request: 00:24:15.481 { 00:24:15.481 "name": "nvme_second", 00:24:15.481 "trtype": "tcp", 00:24:15.481 "traddr": "10.0.0.2", 00:24:15.481 "adrfam": "ipv4", 00:24:15.481 "trsvcid": "8009", 00:24:15.481 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:15.481 "wait_for_attach": true, 00:24:15.481 "method": "bdev_nvme_start_discovery", 00:24:15.481 "req_id": 1 00:24:15.481 } 00:24:15.481 Got JSON-RPC error response 00:24:15.481 response: 00:24:15.481 { 00:24:15.481 "code": -17, 00:24:15.481 "message": "File exists" 00:24:15.481 } 
00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:15.481 17:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.481 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:15.481 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:15.481 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.481 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:15.481 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:15.481 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:15.481 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.481 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:15.740 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.740 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:15.740 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:15.740 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:15.740 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:15.740 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:15.740 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.740 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:15.740 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.740 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:15.741 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:15.741 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.677 [2024-12-09 17:34:43.071122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.677 [2024-12-09 17:34:43.071148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0c430 with addr=10.0.0.2, port=8010 00:24:16.677 [2024-12-09 17:34:43.071160] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:16.677 [2024-12-09 17:34:43.071170] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:16.677 [2024-12-09 17:34:43.071176] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:17.613 [2024-12-09 17:34:44.073647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.614 [2024-12-09 17:34:44.073669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0c430 with addr=10.0.0.2, port=8010 00:24:17.614 [2024-12-09 17:34:44.073681] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:17.614 [2024-12-09 17:34:44.073687] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:17.614 [2024-12-09 17:34:44.073693] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:18.551 [2024-12-09 17:34:45.075779] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:18.551 request: 00:24:18.551 { 00:24:18.551 "name": "nvme_second", 00:24:18.551 "trtype": "tcp", 00:24:18.551 "traddr": "10.0.0.2", 00:24:18.551 "adrfam": "ipv4", 00:24:18.551 "trsvcid": "8010", 00:24:18.551 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:18.551 "wait_for_attach": false, 00:24:18.551 "attach_timeout_ms": 3000, 00:24:18.551 "method": "bdev_nvme_start_discovery", 00:24:18.551 "req_id": 1 
00:24:18.551 } 00:24:18.551 Got JSON-RPC error response 00:24:18.551 response: 00:24:18.551 { 00:24:18.551 "code": -110, 00:24:18.551 "message": "Connection timed out" 00:24:18.551 } 00:24:18.551 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:18.551 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:18.551 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:18.551 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:18.551 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:18.551 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:18.551 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:18.551 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:18.551 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.551 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:18.552 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.552 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2006747 00:24:18.811 17:34:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:18.811 rmmod nvme_tcp 00:24:18.811 rmmod nvme_fabrics 00:24:18.811 rmmod nvme_keyring 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2006614 ']' 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2006614 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2006614 ']' 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2006614 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2006614 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2006614' 00:24:18.811 killing process with pid 2006614 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2006614 00:24:18.811 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2006614 00:24:19.071 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:19.071 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:19.071 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:19.071 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:19.071 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:19.071 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:19.071 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:19.071 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:19.071 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:19.071 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.071 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.071 17:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.978 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:24:20.978 00:24:20.978 real 0m16.949s 00:24:20.978 user 0m20.107s 00:24:20.978 sys 0m5.789s 00:24:20.978 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.978 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.978 ************************************ 00:24:20.978 END TEST nvmf_host_discovery 00:24:20.978 ************************************ 00:24:20.978 17:34:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:20.978 17:34:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:20.978 17:34:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.978 17:34:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.237 ************************************ 00:24:21.237 START TEST nvmf_host_multipath_status 00:24:21.237 ************************************ 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:21.237 * Looking for test storage... 
00:24:21.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:21.237 17:34:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:21.237 17:34:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:21.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.237 --rc genhtml_branch_coverage=1 00:24:21.237 --rc genhtml_function_coverage=1 00:24:21.237 --rc genhtml_legend=1 00:24:21.237 --rc geninfo_all_blocks=1 00:24:21.237 --rc geninfo_unexecuted_blocks=1 00:24:21.237 00:24:21.237 ' 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:21.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.237 --rc genhtml_branch_coverage=1 00:24:21.237 --rc genhtml_function_coverage=1 00:24:21.237 --rc genhtml_legend=1 00:24:21.237 --rc geninfo_all_blocks=1 00:24:21.237 --rc geninfo_unexecuted_blocks=1 00:24:21.237 00:24:21.237 ' 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:21.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.237 --rc genhtml_branch_coverage=1 00:24:21.237 --rc genhtml_function_coverage=1 00:24:21.237 --rc genhtml_legend=1 00:24:21.237 --rc geninfo_all_blocks=1 00:24:21.237 --rc geninfo_unexecuted_blocks=1 00:24:21.237 00:24:21.237 ' 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:21.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.237 --rc genhtml_branch_coverage=1 00:24:21.237 --rc genhtml_function_coverage=1 00:24:21.237 --rc genhtml_legend=1 00:24:21.237 --rc geninfo_all_blocks=1 00:24:21.237 --rc geninfo_unexecuted_blocks=1 00:24:21.237 00:24:21.237 ' 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:21.237 
17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:21.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:21.237 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:21.238 17:34:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:21.238 17:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:27.809 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:27.809 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:27.809 Found net devices under 0000:af:00.0: cvl_0_0 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.809 17:34:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:27.809 Found net devices under 0000:af:00.1: cvl_0_1 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:27.809 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.810 17:34:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:27.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:24:27.810 00:24:27.810 --- 10.0.0.2 ping statistics --- 00:24:27.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.810 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:24:27.810 00:24:27.810 --- 10.0.0.1 ping statistics --- 00:24:27.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.810 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2011720 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2011720 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2011720 ']' 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:27.810 [2024-12-09 17:34:53.697856] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:24:27.810 [2024-12-09 17:34:53.697902] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.810 [2024-12-09 17:34:53.776677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:27.810 [2024-12-09 17:34:53.815991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.810 [2024-12-09 17:34:53.816025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:27.810 [2024-12-09 17:34:53.816032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.810 [2024-12-09 17:34:53.816038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.810 [2024-12-09 17:34:53.816043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.810 [2024-12-09 17:34:53.817160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.810 [2024-12-09 17:34:53.817161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2011720 00:24:27.810 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:27.810 [2024-12-09 17:34:54.118219] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.810 17:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:24:27.810 Malloc0 00:24:28.069 17:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:28.069 17:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:28.328 17:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.587 [2024-12-09 17:34:54.949867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.587 17:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:28.844 [2024-12-09 17:34:55.150402] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:28.845 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2011976 00:24:28.845 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:28.845 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:28.845 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2011976 /var/tmp/bdevperf.sock 00:24:28.845 17:34:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2011976 ']' 00:24:28.845 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.845 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.845 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.845 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.845 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:29.103 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.103 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:29.103 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:29.361 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:29.619 Nvme0n1 00:24:29.620 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:29.878 Nvme0n1 00:24:29.878 17:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:29.878 17:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:32.409 17:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:32.409 17:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:32.409 17:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:32.409 17:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:33.345 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:33.345 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:33.345 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.345 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:33.603 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.603 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:33.603 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.603 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:33.862 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:33.862 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:33.862 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:33.862 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.120 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.120 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:34.120 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.120 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:34.378 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.378 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:34.378 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.378 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:34.378 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.378 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:34.378 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.378 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:34.637 17:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.637 17:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:34.637 17:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:34.896 17:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:35.154 17:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:36.089 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:36.089 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:36.089 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.089 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:36.347 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:36.347 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:36.347 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.347 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:36.605 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.605 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:36.605 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.605 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:36.605 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.605 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:36.605 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.605 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:36.928 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.928 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:36.928 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.928 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:37.267 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.267 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:37.267 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.267 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:37.268 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.268 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:37.268 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:37.545 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:37.805 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:38.745 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:38.745 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:38.745 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.745 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:39.005 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.005 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:39.005 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.005 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:39.263 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:39.263 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:39.264 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.264 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:39.522 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.522 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:39.522 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.522 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:39.781 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.781 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:39.781 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.781 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:39.781 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.781 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:39.781 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.781 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:40.040 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.040 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:40.040 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:40.299 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:40.557 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:41.492 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:41.492 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:41.492 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.492 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:41.750 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.750 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:41.750 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.750 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:42.008 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:42.008 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:42.008 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.008 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:42.267 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.267 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:42.267 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:42.267 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.267 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.267 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:42.267 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.267 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:42.525 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.525 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:42.525 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.525 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:42.783 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:42.783 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:42.783 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:43.042 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:43.301 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:44.237 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:44.237 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:44.237 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.237 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:44.497 17:35:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.497 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:44.497 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.497 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:44.497 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.497 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:44.497 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.497 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:44.756 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.756 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:44.756 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.756 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:45.014 
17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.014 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:45.015 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.015 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:45.273 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:45.273 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:45.273 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.273 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:45.273 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:45.273 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:45.273 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:45.532 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:45.791 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:46.728 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:46.728 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:46.728 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.728 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:46.987 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:46.988 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:46.988 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:46.988 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.247 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.247 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:47.247 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.247 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:47.505 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.505 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:47.505 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.505 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:47.505 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.505 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:47.505 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.505 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:47.764 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:47.764 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:47.764 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.764 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:48.023 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.023 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:48.282 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:48.282 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:48.541 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:48.799 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:49.733 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:49.733 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:49.733 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:49.733 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:49.991 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.991 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:49.991 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.991 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:49.991 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.991 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:49.991 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:49.991 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.249 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.249 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:50.249 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:50.249 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:50.508 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.508 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:50.508 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.508 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:50.766 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.766 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:50.766 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.766 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:51.025 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.025 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:51.025 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:51.025 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:51.284 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:52.221 17:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:52.222 17:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:52.222 17:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.222 17:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:52.481 17:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.481 17:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:52.481 17:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.481 17:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:52.740 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.740 17:35:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:52.740 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.740 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:52.999 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.999 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:52.999 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:52.999 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.259 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.259 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:53.259 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.259 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:53.518 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.518 
17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:53.518 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:53.518 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.518 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.518 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:53.518 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:53.777 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:54.036 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:54.972 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:54.972 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:54.972 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.972 17:35:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:55.231 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.231 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:55.231 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.231 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:55.490 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.490 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:55.490 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.490 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:55.748 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.748 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:55.748 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.748 17:35:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:55.748 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.748 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:55.748 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.748 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:56.007 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.007 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:56.007 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.007 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:56.265 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.265 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:56.265 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:56.524 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:56.782 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:57.716 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:57.716 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:57.716 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:57.716 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.974 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.974 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:57.974 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.974 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:58.232 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:58.232 17:35:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:58.232 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:58.232 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.490 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.490 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:58.490 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.490 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:58.490 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.490 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:58.490 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.490 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:58.748 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.748 
17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:58.748 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.748 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:59.005 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.005 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2011976 00:24:59.005 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2011976 ']' 00:24:59.005 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2011976 00:24:59.005 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:59.005 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.005 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2011976 00:24:59.005 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:59.005 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:59.005 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2011976' 00:24:59.005 killing process with pid 2011976 00:24:59.005 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2011976 00:24:59.005 
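The `port_status` checks repeated throughout the log above all run the same `jq` filter, selecting one field of the io_path whose listener port matches: `jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="PORT").FIELD'` against `bdev_nvme_get_io_paths` output. As a minimal sketch of what that filter does, here is the equivalent selection in Python over a hypothetical payload (the JSON shape is inferred from the filter itself, not taken from SPDK documentation):

```python
import json

# Hypothetical bdev_nvme_get_io_paths payload, shaped the way the jq
# filter expects: poll_groups -> io_paths -> transport.trsvcid + flags.
payload = json.loads("""
{
  "poll_groups": [
    {"io_paths": [
      {"transport": {"trsvcid": "4420"},
       "current": false, "connected": true, "accessible": true},
      {"transport": {"trsvcid": "4421"},
       "current": true, "connected": true, "accessible": true}
    ]}
  ]
}
""")

def port_status(data, port, field):
    """Mirror of:
    jq -r '.poll_groups[].io_paths[]
           | select (.transport.trsvcid=="PORT").FIELD'
    Returns the field of the first io_path listening on `port`."""
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    return None  # no io_path on that port

print(port_status(payload, "4420", "current"))   # False
print(port_status(payload, "4421", "current"))   # True
```

The test script then string-compares the result against the expected `true`/`false` with a `[[ ... == ... ]]` check, which is why each `jq` invocation in the log is immediately followed by a `[[ true == \t\r\u\e ]]` or `[[ false == \f\a\l\s\e ]]` line.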
17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2011976 00:24:59.005 { 00:24:59.005 "results": [ 00:24:59.005 { 00:24:59.005 "job": "Nvme0n1", 00:24:59.005 "core_mask": "0x4", 00:24:59.005 "workload": "verify", 00:24:59.005 "status": "terminated", 00:24:59.005 "verify_range": { 00:24:59.005 "start": 0, 00:24:59.005 "length": 16384 00:24:59.005 }, 00:24:59.005 "queue_depth": 128, 00:24:59.005 "io_size": 4096, 00:24:59.005 "runtime": 28.966584, 00:24:59.005 "iops": 10654.483801058488, 00:24:59.005 "mibps": 41.61907734788472, 00:24:59.005 "io_failed": 0, 00:24:59.005 "io_timeout": 0, 00:24:59.005 "avg_latency_us": 11993.975690265115, 00:24:59.005 "min_latency_us": 225.28, 00:24:59.005 "max_latency_us": 3019898.88 00:24:59.005 } 00:24:59.005 ], 00:24:59.005 "core_count": 1 00:24:59.005 } 00:24:59.267 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2011976 00:24:59.267 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:59.267 [2024-12-09 17:34:55.228656] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:24:59.267 [2024-12-09 17:34:55.228710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2011976 ] 00:24:59.267 [2024-12-09 17:34:55.304519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.267 [2024-12-09 17:34:55.343906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.267 Running I/O for 90 seconds... 
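The bdevperf `results` block above reports both `iops` and `mibps` for the 4096-byte `io_size`; the two figures are internally consistent. A quick sanity check, using the values copied from the log:

```python
# Values copied from the bdevperf results block in the log above.
iops = 10654.483801058488      # completed I/Os per second
io_size = 4096                 # bytes per I/O ("io_size": 4096)
reported_mibps = 41.61907734788472

# Throughput in MiB/s = iops * bytes-per-I/O / bytes-per-MiB
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 5))  # 41.61908
```

So the ~10.65k IOPS at 4 KiB line up with the reported ~41.6 MiB/s over the 28.97 s runtime.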
00:24:59.267 11257.00 IOPS, 43.97 MiB/s [2024-12-09T16:35:25.807Z] 11339.50 IOPS, 44.29 MiB/s [2024-12-09T16:35:25.807Z] 11403.00 IOPS, 44.54 MiB/s [2024-12-09T16:35:25.807Z] 11358.50 IOPS, 44.37 MiB/s [2024-12-09T16:35:25.807Z] 11403.00 IOPS, 44.54 MiB/s [2024-12-09T16:35:25.807Z] 11410.17 IOPS, 44.57 MiB/s [2024-12-09T16:35:25.807Z] 11410.00 IOPS, 44.57 MiB/s [2024-12-09T16:35:25.807Z] 11391.88 IOPS, 44.50 MiB/s [2024-12-09T16:35:25.807Z] 11392.22 IOPS, 44.50 MiB/s [2024-12-09T16:35:25.807Z] 11399.60 IOPS, 44.53 MiB/s [2024-12-09T16:35:25.807Z] 11411.73 IOPS, 44.58 MiB/s [2024-12-09T16:35:25.807Z] 11416.58 IOPS, 44.60 MiB/s [2024-12-09T16:35:25.807Z] [2024-12-09 17:35:09.396179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.267 [2024-12-09 17:35:09.396217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:59.267 [2024-12-09 17:35:09.396255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:130592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.267 [2024-12-09 17:35:09.396264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:59.267 [2024-12-09 17:35:09.396277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.267 [2024-12-09 17:35:09.396284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:59.267 [2024-12-09 17:35:09.396297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.267 [2024-12-09 17:35:09.396305] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:59.267 [2024-12-09 17:35:09.396317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.267 [2024-12-09 17:35:09.396325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:59.267 [2024-12-09 17:35:09.396337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.267 [2024-12-09 17:35:09.396344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:59.267 [2024-12-09 17:35:09.396356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.267 [2024-12-09 17:35:09.396363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:59.267 [2024-12-09 17:35:09.396376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.267 [2024-12-09 17:35:09.396383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:59.267 [2024-12-09 17:35:09.396421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:130648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.267 [2024-12-09 17:35:09.396431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:59.267 [2024-12-09 17:35:09.396444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:52 nsid:1 lba:130656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.396458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.396478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.396497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.396517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.396537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.396557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.396579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.396598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.396619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.396644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.396663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.396682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.267 [2024-12-09 17:35:09.396703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.267 [2024-12-09 17:35:09.396723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.396735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.267 [2024-12-09 17:35:09.396742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.397043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.267 [2024-12-09 17:35:09.397058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.397073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.397080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.397094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.397101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.397115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.397122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.397135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.397142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.397155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.397162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.397183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:130792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.397190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.397203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.397210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:24:59.267 [2024-12-09 17:35:09.397223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:130808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.267 [2024-12-09 17:35:09.397230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.397981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.397988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.398002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.398010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.398024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.398031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:59.268 [2024-12-09 17:35:09.398128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.268 [2024-12-09 17:35:09.398137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.398981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.398999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.399007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.399024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.399032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.399049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.399056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.399073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.399079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.399097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.399104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:59.269 [2024-12-09 17:35:09.399121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.269 [2024-12-09 17:35:09.399127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:59.270 [2024-12-09 17:35:09.399146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.270 [2024-12-09 17:35:09.399154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:59.270 [2024-12-09 17:35:09.399176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.270 [2024-12-09 17:35:09.399183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:59.270 [2024-12-09 17:35:09.399200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.270 [2024-12-09 17:35:09.399207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:59.270 [2024-12-09 17:35:09.399224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.270 [2024-12-09 17:35:09.399231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.270 [2024-12-09 17:35:09.399248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.270 [2024-12-09 17:35:09.399255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.270 [2024-12-09 17:35:09.399272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.270 [2024-12-09 17:35:09.399279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:59.270 [2024-12-09 17:35:09.399296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.270 [2024-12-09 17:35:09.399302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:59.270 [2024-12-09 17:35:09.399321] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.270 [2024-12-09 17:35:09.399328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:09.399346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.270 [2024-12-09 17:35:09.399352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:09.399370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.270 [2024-12-09 17:35:09.399377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:09.399394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.270 [2024-12-09 17:35:09.399401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:09.399418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.270 [2024-12-09 17:35:09.399424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:09.399442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.270 [2024-12-09 17:35:09.399449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:09.399466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.270 [2024-12-09 17:35:09.399473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:59.270 11337.23 IOPS, 44.29 MiB/s [2024-12-09T16:35:25.810Z] 10527.43 IOPS, 41.12 MiB/s [2024-12-09T16:35:25.810Z] 9825.60 IOPS, 38.38 MiB/s [2024-12-09T16:35:25.810Z] 9276.75 IOPS, 36.24 MiB/s [2024-12-09T16:35:25.810Z] 9415.06 IOPS, 36.78 MiB/s [2024-12-09T16:35:25.810Z] 9532.00 IOPS, 37.23 MiB/s [2024-12-09T16:35:25.810Z] 9700.79 IOPS, 37.89 MiB/s [2024-12-09T16:35:25.810Z] 9901.95 IOPS, 38.68 MiB/s [2024-12-09T16:35:25.810Z] 10076.95 IOPS, 39.36 MiB/s [2024-12-09T16:35:25.810Z] 10153.27 IOPS, 39.66 MiB/s [2024-12-09T16:35:25.810Z] 10211.35 IOPS, 39.89 MiB/s [2024-12-09T16:35:25.810Z] 10264.92 IOPS, 40.10 MiB/s [2024-12-09T16:35:25.810Z] 10394.48 IOPS, 40.60 MiB/s [2024-12-09T16:35:25.810Z] 10521.15 IOPS, 41.10 MiB/s [2024-12-09T16:35:25.810Z] [2024-12-09 17:35:23.087630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.087673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.087707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.087716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.087729] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.087737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.087749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.087762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.087774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.087781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.087793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.087800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.087814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.087820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.087834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.270 [2024-12-09 17:35:23.087841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.270 [2024-12-09 17:35:23.088478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:59.270 [2024-12-09 17:35:23.088490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.088839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.088846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.089162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.089189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.089209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.089229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.089248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.089269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.089287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.089307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.089326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.089345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.271 [2024-12-09 17:35:23.089367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.271 [2024-12-09 17:35:23.089386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.271 [2024-12-09 17:35:23.089405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:59.271 [2024-12-09 17:35:23.089417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.271 [2024-12-09 17:35:23.089425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:59.272 [2024-12-09 17:35:23.089437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.272 [2024-12-09 17:35:23.089444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:59.272 [2024-12-09 17:35:23.089456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.272 [2024-12-09 17:35:23.089462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:59.272 [2024-12-09 17:35:23.089474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.272 [2024-12-09 17:35:23.089482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:59.272 [2024-12-09 17:35:23.089494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.272 [2024-12-09 17:35:23.089501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:59.272 [2024-12-09 17:35:23.089513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.272 [2024-12-09 17:35:23.089522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:59.272 [2024-12-09 17:35:23.089534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.272 [2024-12-09 17:35:23.089542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:59.272 [2024-12-09 17:35:23.089554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.272 [2024-12-09 17:35:23.089561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:59.272 [2024-12-09 17:35:23.089573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.272 [2024-12-09 17:35:23.089580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:59.272 [2024-12-09 17:35:23.089593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.272 [2024-12-09 17:35:23.089600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:59.272 [2024-12-09 17:35:23.089614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.272 [2024-12-09 17:35:23.089621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:59.272 [2024-12-09 17:35:23.089633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.272 [2024-12-09 17:35:23.089641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:59.272 [2024-12-09 17:35:23.089654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.272 [2024-12-09 17:35:23.089661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.272 10600.70 IOPS, 41.41 MiB/s [2024-12-09T16:35:25.812Z] 
10631.32 IOPS, 41.53 MiB/s [2024-12-09T16:35:25.812Z] Received shutdown signal, test time was about 28.967227 seconds
00:24:59.272
00:24:59.272 Latency(us)
00:24:59.272 [2024-12-09T16:35:25.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:59.272 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:59.272 Verification LBA range: start 0x0 length 0x4000
00:24:59.272 Nvme0n1 : 28.97 10654.48 41.62 0.00 0.00 11993.98 225.28 3019898.88
00:24:59.272 [2024-12-09T16:35:25.812Z] ===================================================================================================================
00:24:59.272 [2024-12-09T16:35:25.812Z] Total : 10654.48 41.62 0.00 0.00 11993.98 225.28 3019898.88
00:24:59.272 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:59.530 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:24:59.530 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:59.530 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:59.531 rmmod nvme_tcp
00:24:59.531 rmmod nvme_fabrics
00:24:59.531 rmmod nvme_keyring
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2011720 ']'
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2011720
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2011720 ']'
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2011720
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2011720
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2011720'
00:24:59.531 killing process with pid 2011720
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2011720
00:24:59.531 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2011720
00:24:59.789 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:59.789 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:59.789 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:59.789 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:24:59.789 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:24:59.789 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:59.789 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:24:59.789 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:59.789 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:59.789 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:59.789 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:59.789 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:01.695 17:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:01.695
00:25:01.695 real 0m40.691s
00:25:01.695 user 1m50.417s
00:25:01.695 sys 0m11.619s
00:25:01.695 17:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:01.695 17:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:01.695
************************************ 00:25:01.695 END TEST nvmf_host_multipath_status 00:25:01.695 ************************************ 00:25:01.954 17:35:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:01.954 17:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:01.954 17:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.955 ************************************ 00:25:01.955 START TEST nvmf_discovery_remove_ifc 00:25:01.955 ************************************ 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:01.955 * Looking for test storage... 
00:25:01.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:25:01.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.955 --rc genhtml_branch_coverage=1 00:25:01.955 --rc genhtml_function_coverage=1 00:25:01.955 --rc genhtml_legend=1 00:25:01.955 --rc geninfo_all_blocks=1 00:25:01.955 --rc geninfo_unexecuted_blocks=1 00:25:01.955 00:25:01.955 ' 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:01.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.955 --rc genhtml_branch_coverage=1 00:25:01.955 --rc genhtml_function_coverage=1 00:25:01.955 --rc genhtml_legend=1 00:25:01.955 --rc geninfo_all_blocks=1 00:25:01.955 --rc geninfo_unexecuted_blocks=1 00:25:01.955 00:25:01.955 ' 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:01.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.955 --rc genhtml_branch_coverage=1 00:25:01.955 --rc genhtml_function_coverage=1 00:25:01.955 --rc genhtml_legend=1 00:25:01.955 --rc geninfo_all_blocks=1 00:25:01.955 --rc geninfo_unexecuted_blocks=1 00:25:01.955 00:25:01.955 ' 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:01.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.955 --rc genhtml_branch_coverage=1 00:25:01.955 --rc genhtml_function_coverage=1 00:25:01.955 --rc genhtml_legend=1 00:25:01.955 --rc geninfo_all_blocks=1 00:25:01.955 --rc geninfo_unexecuted_blocks=1 00:25:01.955 00:25:01.955 ' 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.955 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:02.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:02.215 
17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:02.215 17:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.788 17:35:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.788 17:35:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:08.788 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.788 17:35:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:08.788 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:08.788 Found net devices under 0000:af:00.0: cvl_0_0 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:08.788 Found net devices under 0000:af:00.1: cvl_0_1 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:25:08.788 00:25:08.788 --- 10.0.0.2 ping statistics --- 00:25:08.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.788 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:25:08.788 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:25:08.789 00:25:08.789 --- 10.0.0.1 ping statistics --- 00:25:08.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.789 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2020546 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2020546 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2020546 ']' 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.789 [2024-12-09 17:35:34.535264] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:25:08.789 [2024-12-09 17:35:34.535307] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.789 [2024-12-09 17:35:34.595417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.789 [2024-12-09 17:35:34.635191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.789 [2024-12-09 17:35:34.635229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:08.789 [2024-12-09 17:35:34.635237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.789 [2024-12-09 17:35:34.635244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.789 [2024-12-09 17:35:34.635249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.789 [2024-12-09 17:35:34.635724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.789 [2024-12-09 17:35:34.787622] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.789 [2024-12-09 17:35:34.795808] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:08.789 null0 00:25:08.789 [2024-12-09 17:35:34.827783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2020566 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2020566 /tmp/host.sock 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2020566 ']' 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:08.789 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.789 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.789 [2024-12-09 17:35:34.895207] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:25:08.789 [2024-12-09 17:35:34.895248] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020566 ] 00:25:08.789 [2024-12-09 17:35:34.966405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.789 [2024-12-09 17:35:35.005763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.789 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.789 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:08.789 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:08.789 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:08.789 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.789 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.789 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.789 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:08.789 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.789 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.789 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.789 17:35:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:08.789 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.789 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:09.726 [2024-12-09 17:35:36.195316] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:09.726 [2024-12-09 17:35:36.195333] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:09.726 [2024-12-09 17:35:36.195349] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:09.984 [2024-12-09 17:35:36.281609] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:09.984 [2024-12-09 17:35:36.464519] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:09.984 [2024-12-09 17:35:36.465285] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2437a90:1 started. 
00:25:09.984 [2024-12-09 17:35:36.466622] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:09.984 [2024-12-09 17:35:36.466662] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:09.984 [2024-12-09 17:35:36.466681] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:09.985 [2024-12-09 17:35:36.466695] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:09.985 [2024-12-09 17:35:36.466711] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:09.985 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.985 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:09.985 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:09.985 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.985 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:09.985 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.985 [2024-12-09 17:35:36.473268] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2437a90 was disconnected and freed. delete nvme_qpair. 
00:25:09.985 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:09.985 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:09.985 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:09.985 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.985 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:09.985 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:09.985 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:10.244 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:10.244 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:10.244 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.244 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:10.244 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.244 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:10.244 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.244 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:10.244 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.244 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:10.244 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:11.180 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:11.180 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.180 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:11.180 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.180 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:11.180 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:11.180 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:11.180 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.438 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:11.438 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:12.376 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:12.376 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.376 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:12.376 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:25:12.376 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.376 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.376 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:12.376 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.376 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:12.376 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:13.312 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:13.313 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.313 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:13.313 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.313 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:13.313 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.313 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:13.313 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.313 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:13.313 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:14.691 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:14.691 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.691 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:14.691 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.691 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:14.691 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.691 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:14.691 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.691 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:14.691 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:15.628 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:15.628 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.628 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:15.628 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.628 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:15.628 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.628 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:25:15.628 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.628 [2024-12-09 17:35:41.908125] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:15.628 [2024-12-09 17:35:41.908162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.628 [2024-12-09 17:35:41.908177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.628 [2024-12-09 17:35:41.908186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.628 [2024-12-09 17:35:41.908193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.628 [2024-12-09 17:35:41.908200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.628 [2024-12-09 17:35:41.908206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.628 [2024-12-09 17:35:41.908213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.628 [2024-12-09 17:35:41.908219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.628 [2024-12-09 17:35:41.908226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.628 [2024-12-09 17:35:41.908232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.628 [2024-12-09 17:35:41.908239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24142b0 is same with the state(6) to be set 00:25:15.628 [2024-12-09 17:35:41.918147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24142b0 (9): Bad file descriptor 00:25:15.628 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:15.628 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:15.628 [2024-12-09 17:35:41.928185] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:15.628 [2024-12-09 17:35:41.928197] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:15.628 [2024-12-09 17:35:41.928204] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:15.628 [2024-12-09 17:35:41.928209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:15.628 [2024-12-09 17:35:41.928229] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:16.565 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:16.565 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.565 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:16.565 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.565 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:16.565 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.565 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:16.565 [2024-12-09 17:35:42.952202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:16.565 [2024-12-09 17:35:42.952273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24142b0 with addr=10.0.0.2, port=4420 00:25:16.565 [2024-12-09 17:35:42.952303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24142b0 is same with the state(6) to be set 00:25:16.565 [2024-12-09 17:35:42.952352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24142b0 (9): Bad file descriptor 00:25:16.565 [2024-12-09 17:35:42.953296] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:25:16.565 [2024-12-09 17:35:42.953358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:16.565 [2024-12-09 17:35:42.953382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:16.565 [2024-12-09 17:35:42.953404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:16.565 [2024-12-09 17:35:42.953425] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:16.565 [2024-12-09 17:35:42.953440] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:16.565 [2024-12-09 17:35:42.953453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:16.565 [2024-12-09 17:35:42.953474] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:16.565 [2024-12-09 17:35:42.953489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:16.565 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.565 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:16.565 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:17.502 [2024-12-09 17:35:43.955997] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:17.502 [2024-12-09 17:35:43.956016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:17.502 [2024-12-09 17:35:43.956027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:17.502 [2024-12-09 17:35:43.956034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:17.502 [2024-12-09 17:35:43.956041] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:17.502 [2024-12-09 17:35:43.956047] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:17.502 [2024-12-09 17:35:43.956051] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:17.502 [2024-12-09 17:35:43.956055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:17.502 [2024-12-09 17:35:43.956074] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:17.502 [2024-12-09 17:35:43.956092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.502 [2024-12-09 17:35:43.956100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.502 [2024-12-09 17:35:43.956109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.502 [2024-12-09 17:35:43.956119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.502 [2024-12-09 17:35:43.956126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:17.502 [2024-12-09 17:35:43.956133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.502 [2024-12-09 17:35:43.956140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.502 [2024-12-09 17:35:43.956146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.502 [2024-12-09 17:35:43.956153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.502 [2024-12-09 17:35:43.956160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.502 [2024-12-09 17:35:43.956171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:25:17.502 [2024-12-09 17:35:43.956476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24039a0 (9): Bad file descriptor 00:25:17.502 [2024-12-09 17:35:43.957488] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:17.502 [2024-12-09 17:35:43.957499] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:17.502 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.502 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.502 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.502 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:17.502 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.502 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.502 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.502 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.502 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:17.502 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.503 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.761 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:17.761 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.761 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.761 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.761 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.761 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.761 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.761 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.761 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:17.761 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:17.761 17:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:18.700 17:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.700 17:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.700 17:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.700 17:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.700 17:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.700 17:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.700 17:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.700 17:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.700 17:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:18.700 17:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.638 [2024-12-09 17:35:46.009559] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:19.639 [2024-12-09 17:35:46.009576] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:19.639 [2024-12-09 17:35:46.009588] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:19.639 [2024-12-09 17:35:46.136966] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:19.897 17:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.897 17:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.897 17:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.897 17:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.897 17:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.897 17:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.897 17:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.898 17:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.898 17:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:19.898 17:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.898 [2024-12-09 17:35:46.359031] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:19.898 [2024-12-09 17:35:46.359571] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x241eaa0:1 started. 
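The trace above shows the harness polling `get_bdev_list` once per second (`rpc_cmd ... bdev_get_bdevs | jq -r '.[].name' | sort | xargs`, then `sleep 1`) until `nvme1n1` appears. A minimal sketch of that wait-until-present pattern — the function name, the `list_cmd` parameter, and the timeout are illustrative, not the actual SPDK helpers:

```shell
# Hypothetical re-creation of the wait_for_bdev polling loop seen in the
# trace: run a list command, normalize the names, and retry once per
# second until the expected bdev name appears or a timeout expires.
wait_for_name() {
    local want=$1 timeout=${2:-10} list_cmd=$3
    local i found
    for ((i = 0; i < timeout; i++)); do
        # sort | xargs mirrors the pipeline in the trace: one name per
        # line in, a single space-separated line out
        found=$($list_cmd | sort | xargs)
        [[ "$found" == *"$want"* ]] && return 0
        sleep 1
    done
    return 1
}
```

In the real script the list command is `rpc_cmd -s /tmp/host.sock bdev_get_bdevs` filtered through `jq -r '.[].name'`; here any command printing one name per line works.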
00:25:19.898 [2024-12-09 17:35:46.360573] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:19.898 [2024-12-09 17:35:46.360602] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:19.898 [2024-12-09 17:35:46.360618] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:19.898 [2024-12-09 17:35:46.360630] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:19.898 [2024-12-09 17:35:46.360637] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:19.898 [2024-12-09 17:35:46.368466] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x241eaa0 was disconnected and freed. delete nvme_qpair. 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:20.834 17:35:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2020566 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2020566 ']' 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2020566 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2020566 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2020566' 00:25:20.834 killing process with pid 2020566 00:25:20.834 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2020566 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2020566 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:21.093 
17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:21.093 rmmod nvme_tcp 00:25:21.093 rmmod nvme_fabrics 00:25:21.093 rmmod nvme_keyring 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2020546 ']' 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2020546 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2020546 ']' 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2020546 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2020546 00:25:21.093 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2020546' 00:25:21.352 
killing process with pid 2020546 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2020546 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2020546 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.352 17:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.384 17:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:23.384 00:25:23.384 real 0m21.561s 00:25:23.384 user 0m26.850s 00:25:23.384 sys 0m5.817s 00:25:23.384 17:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:23.384 17:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.384 ************************************ 00:25:23.384 END TEST nvmf_discovery_remove_ifc 00:25:23.384 ************************************ 00:25:23.384 17:35:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:23.384 17:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:23.384 17:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.384 17:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.643 ************************************ 00:25:23.643 START TEST nvmf_identify_kernel_target 00:25:23.643 ************************************ 00:25:23.643 17:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:23.643 * Looking for test storage... 
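The `killprocess 2020566` sequence traced earlier first probes the pid with `kill -0`, then reads the command name with `ps --no-headers -o comm=` and refuses to signal `sudo`, and only then kills. A sketch of that guard under the same steps (the function body is a reconstruction, not the actual `autotest_common.sh` code):

```shell
# Hypothetical sketch of the killprocess guard traced above: verify the
# pid is non-empty and alive, check the command name so a privileged
# wrapper (sudo) is never signalled, then kill and reap the process.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # process must exist
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1           # refuse to kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore kill status
}
```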
00:25:23.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:23.643 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:23.643 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:23.644 17:35:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.644 17:35:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:23.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.644 --rc genhtml_branch_coverage=1 00:25:23.644 --rc genhtml_function_coverage=1 00:25:23.644 --rc genhtml_legend=1 00:25:23.644 --rc geninfo_all_blocks=1 00:25:23.644 --rc geninfo_unexecuted_blocks=1 00:25:23.644 00:25:23.644 ' 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:23.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.644 --rc genhtml_branch_coverage=1 00:25:23.644 --rc genhtml_function_coverage=1 00:25:23.644 --rc genhtml_legend=1 00:25:23.644 --rc geninfo_all_blocks=1 00:25:23.644 --rc geninfo_unexecuted_blocks=1 00:25:23.644 00:25:23.644 ' 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:23.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.644 --rc genhtml_branch_coverage=1 00:25:23.644 --rc genhtml_function_coverage=1 00:25:23.644 --rc genhtml_legend=1 00:25:23.644 --rc geninfo_all_blocks=1 00:25:23.644 --rc geninfo_unexecuted_blocks=1 00:25:23.644 00:25:23.644 ' 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:23.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.644 --rc genhtml_branch_coverage=1 00:25:23.644 --rc genhtml_function_coverage=1 00:25:23.644 --rc genhtml_legend=1 00:25:23.644 --rc geninfo_all_blocks=1 00:25:23.644 --rc geninfo_unexecuted_blocks=1 00:25:23.644 00:25:23.644 ' 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
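The `lt 1.15 2` / `cmp_versions` trace above splits both versions on `.-:` into arrays and compares them field by field. A compact sketch of that strictly-less-than check, splitting on dots only (the function name is mine; the real helper lives in `scripts/common.sh`):

```shell
# Hypothetical sketch of the cmp_versions '<' comparison traced above:
# split both versions on dots, compare numerically field by field
# (missing fields count as 0), succeed only when v1 < v2.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not strictly less
}
```

Comparing component-wise, not lexically, is what makes `1.2.3 < 1.10` come out true, which a plain string compare would get wrong.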
00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:23.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
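The `[: : integer expression expected` message from `nvmf/common.sh` line 33 above is a real (if harmless) bash slip: `'[' '' -eq 1 ']'` passes an empty string where `-eq` requires an integer. A defensive pattern is to default the variable before the numeric test — `SOME_FLAG` here is a made-up name, not the variable in `common.sh`:

```shell
# An unset or empty flag makes '[ "" -eq 1 ]' print
# "integer expression expected"; defaulting with ${VAR:-0} keeps the
# numeric test well-formed. SOME_FLAG is illustrative only.
SOME_FLAG=""
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```

Note that `:-` (as opposed to `-`) also covers the set-but-empty case, which is exactly the state the trace shows.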
00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.644 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:23.645 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:23.645 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:23.645 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.645 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.645 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.645 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:23.645 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:23.645 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:23.645 17:35:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.216 17:35:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:30.216 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.216 17:35:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:30.216 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.216 17:35:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:30.216 Found net devices under 0000:af:00.0: cvl_0_0 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:30.216 Found net devices under 0000:af:00.1: cvl_0_1 
00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.216 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.217 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:30.217 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:30.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:30.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:25:30.217 00:25:30.217 --- 10.0.0.2 ping statistics --- 00:25:30.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.217 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:25:30.217 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:30.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:25:30.217 00:25:30.217 --- 10.0.0.1 ping statistics --- 00:25:30.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.217 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:25:30.217 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.217 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:30.217 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:30.217 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.217 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:30.217 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:30.217 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.217 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:30.217 17:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:30.217 
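The network setup phase above ends here: both interfaces are flushed, one is moved into the `cvl_0_0_ns_spdk` namespace, addresses are assigned, and the `ipts` helper (nvmf/common.sh@287, expanded at @790) opens port 4420. Note how the expansion appends `-m comment --comment 'SPDK_NVMF:<original args>'` to the plain `iptables` call, so teardown can later find and delete exactly the rules the test added. A minimal re-creation of that tagging idea; the function name and the dry-run switch are illustrative, not SPDK's actual code:

```shell
#!/usr/bin/env bash
# Sketch of an iptables wrapper that tags each rule with a searchable
# comment, mirroring what the 'ipts' expansion in the log does.
# IPTS_DRY_RUN=1 prints the command instead of running it (running it
# for real requires root). All names here are illustrative.
ipts() {
    local cmd=(iptables "$@" -m comment --comment "SPDK_NVMF:$*")
    if [[ "${IPTS_DRY_RUN:-0}" == 1 ]]; then
        echo "${cmd[@]}"
    else
        "${cmd[@]}"
    fi
}

IPTS_DRY_RUN=1 ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Cleanup can then do `iptables-save | grep SPDK_NVMF` to enumerate every rule the harness installed, rather than guessing rule positions.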
17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:30.217 17:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:32.754 Waiting for block devices as requested 00:25:32.754 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:32.754 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:32.754 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:32.754 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:32.754 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:32.754 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:32.754 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:33.014 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:33.014 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:33.014 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:33.273 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:33.273 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:33.273 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:33.533 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:33.533 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:25:33.533 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:33.792 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:33.792 No valid GPT data, bailing 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:33.792 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:34.057 00:25:34.057 Discovery Log Number of Records 2, Generation counter 2 00:25:34.057 =====Discovery Log Entry 0====== 00:25:34.057 trtype: tcp 00:25:34.057 adrfam: ipv4 00:25:34.057 subtype: current discovery subsystem 
00:25:34.057 treq: not specified, sq flow control disable supported 00:25:34.057 portid: 1 00:25:34.057 trsvcid: 4420 00:25:34.057 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:34.057 traddr: 10.0.0.1 00:25:34.057 eflags: none 00:25:34.057 sectype: none 00:25:34.057 =====Discovery Log Entry 1====== 00:25:34.057 trtype: tcp 00:25:34.057 adrfam: ipv4 00:25:34.057 subtype: nvme subsystem 00:25:34.057 treq: not specified, sq flow control disable supported 00:25:34.057 portid: 1 00:25:34.057 trsvcid: 4420 00:25:34.057 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:34.057 traddr: 10.0.0.1 00:25:34.057 eflags: none 00:25:34.057 sectype: none 00:25:34.057 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:34.057 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:34.057 ===================================================== 00:25:34.057 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:34.057 ===================================================== 00:25:34.057 Controller Capabilities/Features 00:25:34.057 ================================ 00:25:34.057 Vendor ID: 0000 00:25:34.057 Subsystem Vendor ID: 0000 00:25:34.058 Serial Number: 00ac4920e3499dcb798a 00:25:34.058 Model Number: Linux 00:25:34.058 Firmware Version: 6.8.9-20 00:25:34.058 Recommended Arb Burst: 0 00:25:34.058 IEEE OUI Identifier: 00 00 00 00:25:34.058 Multi-path I/O 00:25:34.058 May have multiple subsystem ports: No 00:25:34.058 May have multiple controllers: No 00:25:34.058 Associated with SR-IOV VF: No 00:25:34.058 Max Data Transfer Size: Unlimited 00:25:34.058 Max Number of Namespaces: 0 00:25:34.058 Max Number of I/O Queues: 1024 00:25:34.058 NVMe Specification Version (VS): 1.3 00:25:34.058 NVMe Specification Version (Identify): 1.3 00:25:34.058 Maximum Queue Entries: 1024 
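The mkdir/echo/ln -s sequence traced at nvmf/common.sh@686-705 builds the kernel nvmet target through configfs: create the subsystem, namespace, and port directories, write the backing device and TCP listener attributes, then symlink the subsystem into the port to expose it — which is why the discovery log above immediately shows two records. A sketch of that layout with the configfs root parameterized so the structure can be exercised without the nvmet module; against a real kernel the root is `/sys/kernel/config/nvmet`, writes need root, and the attribute file names below are the standard nvmet ones as I understand them (the `$root` indirection and function name are illustrative):

```shell
#!/usr/bin/env bash
# Recreate the directory/attribute layout from the log under an
# arbitrary root. A plain directory stands in for configfs purely to
# illustrate the structure; on a real system mkdir/echo would be
# interpreted by the nvmet driver.
set -euo pipefail
configure_kernel_target() {
    local root=$1 nqn=$2 ip=$3 dev=$4
    local subsys=$root/subsystems/$nqn
    local ns=$subsys/namespaces/1
    local port=$root/ports/1
    mkdir -p "$ns" "$port/subsystems"
    echo "SPDK-$nqn" > "$subsys/attr_model"        # model string
    echo 1           > "$subsys/attr_allow_any_host"
    echo "$dev"      > "$ns/device_path"           # backing block device
    echo 1           > "$ns/enable"
    echo "$ip"       > "$port/addr_traddr"
    echo tcp         > "$port/addr_trtype"
    echo 4420        > "$port/addr_trsvcid"
    echo ipv4        > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"  # exposes the subsystem on the port
}

root=$(mktemp -d)
configure_kernel_target "$root" nqn.2016-06.io.spdk:testnqn 10.0.0.1 /dev/nvme0n1
```

Removing the symlink and the directories in reverse order tears the target down again, which is what `clean_kernel_target` in the trap at the top of this test does.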
00:25:34.058 Contiguous Queues Required: No 00:25:34.058 Arbitration Mechanisms Supported 00:25:34.058 Weighted Round Robin: Not Supported 00:25:34.058 Vendor Specific: Not Supported 00:25:34.058 Reset Timeout: 7500 ms 00:25:34.058 Doorbell Stride: 4 bytes 00:25:34.058 NVM Subsystem Reset: Not Supported 00:25:34.058 Command Sets Supported 00:25:34.058 NVM Command Set: Supported 00:25:34.058 Boot Partition: Not Supported 00:25:34.058 Memory Page Size Minimum: 4096 bytes 00:25:34.058 Memory Page Size Maximum: 4096 bytes 00:25:34.058 Persistent Memory Region: Not Supported 00:25:34.058 Optional Asynchronous Events Supported 00:25:34.058 Namespace Attribute Notices: Not Supported 00:25:34.058 Firmware Activation Notices: Not Supported 00:25:34.058 ANA Change Notices: Not Supported 00:25:34.058 PLE Aggregate Log Change Notices: Not Supported 00:25:34.058 LBA Status Info Alert Notices: Not Supported 00:25:34.058 EGE Aggregate Log Change Notices: Not Supported 00:25:34.058 Normal NVM Subsystem Shutdown event: Not Supported 00:25:34.058 Zone Descriptor Change Notices: Not Supported 00:25:34.058 Discovery Log Change Notices: Supported 00:25:34.058 Controller Attributes 00:25:34.058 128-bit Host Identifier: Not Supported 00:25:34.058 Non-Operational Permissive Mode: Not Supported 00:25:34.058 NVM Sets: Not Supported 00:25:34.058 Read Recovery Levels: Not Supported 00:25:34.058 Endurance Groups: Not Supported 00:25:34.058 Predictable Latency Mode: Not Supported 00:25:34.058 Traffic Based Keep ALive: Not Supported 00:25:34.058 Namespace Granularity: Not Supported 00:25:34.058 SQ Associations: Not Supported 00:25:34.058 UUID List: Not Supported 00:25:34.058 Multi-Domain Subsystem: Not Supported 00:25:34.058 Fixed Capacity Management: Not Supported 00:25:34.058 Variable Capacity Management: Not Supported 00:25:34.058 Delete Endurance Group: Not Supported 00:25:34.058 Delete NVM Set: Not Supported 00:25:34.058 Extended LBA Formats Supported: Not Supported 00:25:34.058 Flexible 
Data Placement Supported: Not Supported 00:25:34.058 00:25:34.058 Controller Memory Buffer Support 00:25:34.058 ================================ 00:25:34.058 Supported: No 00:25:34.058 00:25:34.058 Persistent Memory Region Support 00:25:34.058 ================================ 00:25:34.058 Supported: No 00:25:34.058 00:25:34.058 Admin Command Set Attributes 00:25:34.058 ============================ 00:25:34.058 Security Send/Receive: Not Supported 00:25:34.058 Format NVM: Not Supported 00:25:34.058 Firmware Activate/Download: Not Supported 00:25:34.058 Namespace Management: Not Supported 00:25:34.058 Device Self-Test: Not Supported 00:25:34.058 Directives: Not Supported 00:25:34.058 NVMe-MI: Not Supported 00:25:34.058 Virtualization Management: Not Supported 00:25:34.058 Doorbell Buffer Config: Not Supported 00:25:34.058 Get LBA Status Capability: Not Supported 00:25:34.058 Command & Feature Lockdown Capability: Not Supported 00:25:34.058 Abort Command Limit: 1 00:25:34.058 Async Event Request Limit: 1 00:25:34.058 Number of Firmware Slots: N/A 00:25:34.058 Firmware Slot 1 Read-Only: N/A 00:25:34.058 Firmware Activation Without Reset: N/A 00:25:34.058 Multiple Update Detection Support: N/A 00:25:34.058 Firmware Update Granularity: No Information Provided 00:25:34.058 Per-Namespace SMART Log: No 00:25:34.058 Asymmetric Namespace Access Log Page: Not Supported 00:25:34.058 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:34.058 Command Effects Log Page: Not Supported 00:25:34.058 Get Log Page Extended Data: Supported 00:25:34.058 Telemetry Log Pages: Not Supported 00:25:34.058 Persistent Event Log Pages: Not Supported 00:25:34.058 Supported Log Pages Log Page: May Support 00:25:34.058 Commands Supported & Effects Log Page: Not Supported 00:25:34.058 Feature Identifiers & Effects Log Page:May Support 00:25:34.058 NVMe-MI Commands & Effects Log Page: May Support 00:25:34.058 Data Area 4 for Telemetry Log: Not Supported 00:25:34.058 Error Log Page Entries 
Supported: 1 00:25:34.058 Keep Alive: Not Supported 00:25:34.058 00:25:34.058 NVM Command Set Attributes 00:25:34.058 ========================== 00:25:34.058 Submission Queue Entry Size 00:25:34.058 Max: 1 00:25:34.058 Min: 1 00:25:34.058 Completion Queue Entry Size 00:25:34.058 Max: 1 00:25:34.058 Min: 1 00:25:34.058 Number of Namespaces: 0 00:25:34.058 Compare Command: Not Supported 00:25:34.058 Write Uncorrectable Command: Not Supported 00:25:34.058 Dataset Management Command: Not Supported 00:25:34.058 Write Zeroes Command: Not Supported 00:25:34.058 Set Features Save Field: Not Supported 00:25:34.058 Reservations: Not Supported 00:25:34.058 Timestamp: Not Supported 00:25:34.058 Copy: Not Supported 00:25:34.058 Volatile Write Cache: Not Present 00:25:34.058 Atomic Write Unit (Normal): 1 00:25:34.058 Atomic Write Unit (PFail): 1 00:25:34.058 Atomic Compare & Write Unit: 1 00:25:34.058 Fused Compare & Write: Not Supported 00:25:34.058 Scatter-Gather List 00:25:34.058 SGL Command Set: Supported 00:25:34.058 SGL Keyed: Not Supported 00:25:34.058 SGL Bit Bucket Descriptor: Not Supported 00:25:34.058 SGL Metadata Pointer: Not Supported 00:25:34.058 Oversized SGL: Not Supported 00:25:34.058 SGL Metadata Address: Not Supported 00:25:34.058 SGL Offset: Supported 00:25:34.058 Transport SGL Data Block: Not Supported 00:25:34.058 Replay Protected Memory Block: Not Supported 00:25:34.058 00:25:34.058 Firmware Slot Information 00:25:34.058 ========================= 00:25:34.058 Active slot: 0 00:25:34.058 00:25:34.058 00:25:34.058 Error Log 00:25:34.058 ========= 00:25:34.058 00:25:34.058 Active Namespaces 00:25:34.058 ================= 00:25:34.058 Discovery Log Page 00:25:34.058 ================== 00:25:34.058 Generation Counter: 2 00:25:34.058 Number of Records: 2 00:25:34.058 Record Format: 0 00:25:34.058 00:25:34.058 Discovery Log Entry 0 00:25:34.058 ---------------------- 00:25:34.058 Transport Type: 3 (TCP) 00:25:34.058 Address Family: 1 (IPv4) 00:25:34.058 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:34.058 Entry Flags: 00:25:34.058 Duplicate Returned Information: 0 00:25:34.058 Explicit Persistent Connection Support for Discovery: 0 00:25:34.058 Transport Requirements: 00:25:34.058 Secure Channel: Not Specified 00:25:34.058 Port ID: 1 (0x0001) 00:25:34.058 Controller ID: 65535 (0xffff) 00:25:34.058 Admin Max SQ Size: 32 00:25:34.058 Transport Service Identifier: 4420 00:25:34.058 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:34.058 Transport Address: 10.0.0.1 00:25:34.058 Discovery Log Entry 1 00:25:34.058 ---------------------- 00:25:34.058 Transport Type: 3 (TCP) 00:25:34.058 Address Family: 1 (IPv4) 00:25:34.058 Subsystem Type: 2 (NVM Subsystem) 00:25:34.058 Entry Flags: 00:25:34.058 Duplicate Returned Information: 0 00:25:34.058 Explicit Persistent Connection Support for Discovery: 0 00:25:34.058 Transport Requirements: 00:25:34.058 Secure Channel: Not Specified 00:25:34.058 Port ID: 1 (0x0001) 00:25:34.058 Controller ID: 65535 (0xffff) 00:25:34.058 Admin Max SQ Size: 32 00:25:34.058 Transport Service Identifier: 4420 00:25:34.058 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:34.058 Transport Address: 10.0.0.1 00:25:34.058 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:34.058 get_feature(0x01) failed 00:25:34.058 get_feature(0x02) failed 00:25:34.058 get_feature(0x04) failed 00:25:34.058 ===================================================== 00:25:34.058 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:34.058 ===================================================== 00:25:34.058 Controller Capabilities/Features 00:25:34.058 ================================ 00:25:34.058 Vendor ID: 0000 00:25:34.058 Subsystem Vendor ID: 
0000 00:25:34.058 Serial Number: bb13b3c0b682a78d7863 00:25:34.058 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:34.058 Firmware Version: 6.8.9-20 00:25:34.058 Recommended Arb Burst: 6 00:25:34.058 IEEE OUI Identifier: 00 00 00 00:25:34.058 Multi-path I/O 00:25:34.059 May have multiple subsystem ports: Yes 00:25:34.059 May have multiple controllers: Yes 00:25:34.059 Associated with SR-IOV VF: No 00:25:34.059 Max Data Transfer Size: Unlimited 00:25:34.059 Max Number of Namespaces: 1024 00:25:34.059 Max Number of I/O Queues: 128 00:25:34.059 NVMe Specification Version (VS): 1.3 00:25:34.059 NVMe Specification Version (Identify): 1.3 00:25:34.059 Maximum Queue Entries: 1024 00:25:34.059 Contiguous Queues Required: No 00:25:34.059 Arbitration Mechanisms Supported 00:25:34.059 Weighted Round Robin: Not Supported 00:25:34.059 Vendor Specific: Not Supported 00:25:34.059 Reset Timeout: 7500 ms 00:25:34.059 Doorbell Stride: 4 bytes 00:25:34.059 NVM Subsystem Reset: Not Supported 00:25:34.059 Command Sets Supported 00:25:34.059 NVM Command Set: Supported 00:25:34.059 Boot Partition: Not Supported 00:25:34.059 Memory Page Size Minimum: 4096 bytes 00:25:34.059 Memory Page Size Maximum: 4096 bytes 00:25:34.059 Persistent Memory Region: Not Supported 00:25:34.059 Optional Asynchronous Events Supported 00:25:34.059 Namespace Attribute Notices: Supported 00:25:34.059 Firmware Activation Notices: Not Supported 00:25:34.059 ANA Change Notices: Supported 00:25:34.059 PLE Aggregate Log Change Notices: Not Supported 00:25:34.059 LBA Status Info Alert Notices: Not Supported 00:25:34.059 EGE Aggregate Log Change Notices: Not Supported 00:25:34.059 Normal NVM Subsystem Shutdown event: Not Supported 00:25:34.059 Zone Descriptor Change Notices: Not Supported 00:25:34.059 Discovery Log Change Notices: Not Supported 00:25:34.059 Controller Attributes 00:25:34.059 128-bit Host Identifier: Supported 00:25:34.059 Non-Operational Permissive Mode: Not Supported 00:25:34.059 NVM Sets: Not 
Supported 00:25:34.059 Read Recovery Levels: Not Supported 00:25:34.059 Endurance Groups: Not Supported 00:25:34.059 Predictable Latency Mode: Not Supported 00:25:34.059 Traffic Based Keep ALive: Supported 00:25:34.059 Namespace Granularity: Not Supported 00:25:34.059 SQ Associations: Not Supported 00:25:34.059 UUID List: Not Supported 00:25:34.059 Multi-Domain Subsystem: Not Supported 00:25:34.059 Fixed Capacity Management: Not Supported 00:25:34.059 Variable Capacity Management: Not Supported 00:25:34.059 Delete Endurance Group: Not Supported 00:25:34.059 Delete NVM Set: Not Supported 00:25:34.059 Extended LBA Formats Supported: Not Supported 00:25:34.059 Flexible Data Placement Supported: Not Supported 00:25:34.059 00:25:34.059 Controller Memory Buffer Support 00:25:34.059 ================================ 00:25:34.059 Supported: No 00:25:34.059 00:25:34.059 Persistent Memory Region Support 00:25:34.059 ================================ 00:25:34.059 Supported: No 00:25:34.059 00:25:34.059 Admin Command Set Attributes 00:25:34.059 ============================ 00:25:34.059 Security Send/Receive: Not Supported 00:25:34.059 Format NVM: Not Supported 00:25:34.059 Firmware Activate/Download: Not Supported 00:25:34.059 Namespace Management: Not Supported 00:25:34.059 Device Self-Test: Not Supported 00:25:34.059 Directives: Not Supported 00:25:34.059 NVMe-MI: Not Supported 00:25:34.059 Virtualization Management: Not Supported 00:25:34.059 Doorbell Buffer Config: Not Supported 00:25:34.059 Get LBA Status Capability: Not Supported 00:25:34.059 Command & Feature Lockdown Capability: Not Supported 00:25:34.059 Abort Command Limit: 4 00:25:34.059 Async Event Request Limit: 4 00:25:34.059 Number of Firmware Slots: N/A 00:25:34.059 Firmware Slot 1 Read-Only: N/A 00:25:34.059 Firmware Activation Without Reset: N/A 00:25:34.059 Multiple Update Detection Support: N/A 00:25:34.059 Firmware Update Granularity: No Information Provided 00:25:34.059 Per-Namespace SMART Log: Yes 
00:25:34.059 Asymmetric Namespace Access Log Page: Supported 00:25:34.059 ANA Transition Time : 10 sec 00:25:34.059 00:25:34.059 Asymmetric Namespace Access Capabilities 00:25:34.059 ANA Optimized State : Supported 00:25:34.059 ANA Non-Optimized State : Supported 00:25:34.059 ANA Inaccessible State : Supported 00:25:34.059 ANA Persistent Loss State : Supported 00:25:34.059 ANA Change State : Supported 00:25:34.059 ANAGRPID is not changed : No 00:25:34.059 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:34.059 00:25:34.059 ANA Group Identifier Maximum : 128 00:25:34.059 Number of ANA Group Identifiers : 128 00:25:34.059 Max Number of Allowed Namespaces : 1024 00:25:34.059 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:34.059 Command Effects Log Page: Supported 00:25:34.059 Get Log Page Extended Data: Supported 00:25:34.059 Telemetry Log Pages: Not Supported 00:25:34.059 Persistent Event Log Pages: Not Supported 00:25:34.059 Supported Log Pages Log Page: May Support 00:25:34.059 Commands Supported & Effects Log Page: Not Supported 00:25:34.059 Feature Identifiers & Effects Log Page:May Support 00:25:34.059 NVMe-MI Commands & Effects Log Page: May Support 00:25:34.059 Data Area 4 for Telemetry Log: Not Supported 00:25:34.059 Error Log Page Entries Supported: 128 00:25:34.059 Keep Alive: Supported 00:25:34.059 Keep Alive Granularity: 1000 ms 00:25:34.059 00:25:34.059 NVM Command Set Attributes 00:25:34.059 ========================== 00:25:34.059 Submission Queue Entry Size 00:25:34.059 Max: 64 00:25:34.059 Min: 64 00:25:34.059 Completion Queue Entry Size 00:25:34.059 Max: 16 00:25:34.059 Min: 16 00:25:34.059 Number of Namespaces: 1024 00:25:34.059 Compare Command: Not Supported 00:25:34.059 Write Uncorrectable Command: Not Supported 00:25:34.059 Dataset Management Command: Supported 00:25:34.059 Write Zeroes Command: Supported 00:25:34.059 Set Features Save Field: Not Supported 00:25:34.059 Reservations: Not Supported 00:25:34.059 Timestamp: Not Supported 
00:25:34.059 Copy: Not Supported 00:25:34.059 Volatile Write Cache: Present 00:25:34.059 Atomic Write Unit (Normal): 1 00:25:34.059 Atomic Write Unit (PFail): 1 00:25:34.059 Atomic Compare & Write Unit: 1 00:25:34.059 Fused Compare & Write: Not Supported 00:25:34.059 Scatter-Gather List 00:25:34.059 SGL Command Set: Supported 00:25:34.059 SGL Keyed: Not Supported 00:25:34.059 SGL Bit Bucket Descriptor: Not Supported 00:25:34.059 SGL Metadata Pointer: Not Supported 00:25:34.059 Oversized SGL: Not Supported 00:25:34.059 SGL Metadata Address: Not Supported 00:25:34.059 SGL Offset: Supported 00:25:34.059 Transport SGL Data Block: Not Supported 00:25:34.059 Replay Protected Memory Block: Not Supported 00:25:34.059 00:25:34.059 Firmware Slot Information 00:25:34.059 ========================= 00:25:34.059 Active slot: 0 00:25:34.059 00:25:34.059 Asymmetric Namespace Access 00:25:34.059 =========================== 00:25:34.059 Change Count : 0 00:25:34.059 Number of ANA Group Descriptors : 1 00:25:34.059 ANA Group Descriptor : 0 00:25:34.059 ANA Group ID : 1 00:25:34.059 Number of NSID Values : 1 00:25:34.059 Change Count : 0 00:25:34.059 ANA State : 1 00:25:34.059 Namespace Identifier : 1 00:25:34.059 00:25:34.059 Commands Supported and Effects 00:25:34.059 ============================== 00:25:34.059 Admin Commands 00:25:34.059 -------------- 00:25:34.059 Get Log Page (02h): Supported 00:25:34.059 Identify (06h): Supported 00:25:34.059 Abort (08h): Supported 00:25:34.059 Set Features (09h): Supported 00:25:34.059 Get Features (0Ah): Supported 00:25:34.059 Asynchronous Event Request (0Ch): Supported 00:25:34.059 Keep Alive (18h): Supported 00:25:34.059 I/O Commands 00:25:34.059 ------------ 00:25:34.059 Flush (00h): Supported 00:25:34.059 Write (01h): Supported LBA-Change 00:25:34.059 Read (02h): Supported 00:25:34.059 Write Zeroes (08h): Supported LBA-Change 00:25:34.059 Dataset Management (09h): Supported 00:25:34.059 00:25:34.059 Error Log 00:25:34.059 ========= 
00:25:34.059 Entry: 0 00:25:34.059 Error Count: 0x3 00:25:34.059 Submission Queue Id: 0x0 00:25:34.059 Command Id: 0x5 00:25:34.059 Phase Bit: 0 00:25:34.059 Status Code: 0x2 00:25:34.059 Status Code Type: 0x0 00:25:34.059 Do Not Retry: 1 00:25:34.059 Error Location: 0x28 00:25:34.059 LBA: 0x0 00:25:34.059 Namespace: 0x0 00:25:34.059 Vendor Log Page: 0x0 00:25:34.059 ----------- 00:25:34.059 Entry: 1 00:25:34.059 Error Count: 0x2 00:25:34.059 Submission Queue Id: 0x0 00:25:34.059 Command Id: 0x5 00:25:34.059 Phase Bit: 0 00:25:34.059 Status Code: 0x2 00:25:34.059 Status Code Type: 0x0 00:25:34.059 Do Not Retry: 1 00:25:34.059 Error Location: 0x28 00:25:34.059 LBA: 0x0 00:25:34.059 Namespace: 0x0 00:25:34.059 Vendor Log Page: 0x0 00:25:34.059 ----------- 00:25:34.059 Entry: 2 00:25:34.059 Error Count: 0x1 00:25:34.059 Submission Queue Id: 0x0 00:25:34.059 Command Id: 0x4 00:25:34.059 Phase Bit: 0 00:25:34.059 Status Code: 0x2 00:25:34.059 Status Code Type: 0x0 00:25:34.059 Do Not Retry: 1 00:25:34.060 Error Location: 0x28 00:25:34.060 LBA: 0x0 00:25:34.060 Namespace: 0x0 00:25:34.060 Vendor Log Page: 0x0 00:25:34.060 00:25:34.060 Number of Queues 00:25:34.060 ================ 00:25:34.060 Number of I/O Submission Queues: 128 00:25:34.060 Number of I/O Completion Queues: 128 00:25:34.060 00:25:34.060 ZNS Specific Controller Data 00:25:34.060 ============================ 00:25:34.060 Zone Append Size Limit: 0 00:25:34.060 00:25:34.060 00:25:34.060 Active Namespaces 00:25:34.060 ================= 00:25:34.060 get_feature(0x05) failed 00:25:34.060 Namespace ID:1 00:25:34.060 Command Set Identifier: NVM (00h) 00:25:34.060 Deallocate: Supported 00:25:34.060 Deallocated/Unwritten Error: Not Supported 00:25:34.060 Deallocated Read Value: Unknown 00:25:34.060 Deallocate in Write Zeroes: Not Supported 00:25:34.060 Deallocated Guard Field: 0xFFFF 00:25:34.060 Flush: Supported 00:25:34.060 Reservation: Not Supported 00:25:34.060 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:34.060 Size (in LBAs): 1953525168 (931GiB) 00:25:34.060 Capacity (in LBAs): 1953525168 (931GiB) 00:25:34.060 Utilization (in LBAs): 1953525168 (931GiB) 00:25:34.060 UUID: 47cdeca5-001e-4185-8901-2187e0a5df75 00:25:34.060 Thin Provisioning: Not Supported 00:25:34.060 Per-NS Atomic Units: Yes 00:25:34.060 Atomic Boundary Size (Normal): 0 00:25:34.060 Atomic Boundary Size (PFail): 0 00:25:34.060 Atomic Boundary Offset: 0 00:25:34.060 NGUID/EUI64 Never Reused: No 00:25:34.060 ANA group ID: 1 00:25:34.060 Namespace Write Protected: No 00:25:34.060 Number of LBA Formats: 1 00:25:34.060 Current LBA Format: LBA Format #00 00:25:34.060 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:34.060 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:34.060 rmmod nvme_tcp 00:25:34.060 rmmod nvme_fabrics 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
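The three error-log entries earlier in this identify dump all carry Status Code 0x2 with Status Code Type 0x0 — in the NVMe base specification that is the generic command status "Invalid Field in Command", and Error Location 0x28 (byte 40) points at the start of CDW10 in the submission queue entry. A small decoding sketch of those two fields (the helper name is mine, not part of the SPDK harness; only the codes seen in this log are mapped):

```shell
#!/usr/bin/env bash
# decode_nvme_status SCT SC -> print a human-readable status string.
# Only the generic command status type (SCT 0x0) values that appear
# in this log are mapped; anything else falls through as unknown.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct" in
        0x0)
            case "$sc" in
                0x0) echo "Successful Completion" ;;
                0x2) echo "Invalid Field in Command" ;;
                *)   echo "Generic status $sc" ;;
            esac ;;
        *) echo "unknown status type $sct" ;;
    esac
}

# All three entries above report SCT 0x0 / SC 0x2:
decode_nvme_status 0x0 0x2
```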
00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.060 17:36:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.595 17:36:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:36.595 17:36:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:36.595 17:36:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:36.595 17:36:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:36.595 17:36:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:36.595 17:36:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:36.595 17:36:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:36.595 17:36:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:36.595 17:36:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:36.595 17:36:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:36.595 17:36:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:39.132 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:39.132 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
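The clean_kernel_target steps above tear the kernel nvmet target down through configfs. A minimal sketch of that ordering (paths follow the log; the function name, and folding the steps into one helper, are my own — this requires root and a live nvmet configfs, and the order matters because the port-to-subsystem symlink must be unlinked before the namespace, port, and subsystem directories can be removed):

```shell
#!/usr/bin/env bash
# Tear down a kernel nvmet subsystem created under configfs, mirroring
# the rm/rmdir sequence in nvmf/common.sh's clean_kernel_target.
teardown_kernel_target() {
    local nqn=$1 cfs=/sys/kernel/config/nvmet
    echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"   # disable the namespace
    rm -f "$cfs/ports/1/subsystems/$nqn"                  # unlink subsystem from port
    rmdir "$cfs/subsystems/$nqn/namespaces/1"             # remove the namespace
    rmdir "$cfs/ports/1"                                  # remove the port
    rmdir "$cfs/subsystems/$nqn"                          # remove the subsystem itself
    modprobe -r nvmet_tcp nvmet                           # finally unload the modules
}

teardown_kernel_target nqn.2016-06.io.spdk:testnqn
```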
00:25:40.071 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:40.071 00:25:40.071 real 0m16.648s 00:25:40.071 user 0m4.284s 00:25:40.071 sys 0m8.718s 00:25:40.071 17:36:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:40.071 17:36:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:40.071 ************************************ 00:25:40.071 END TEST nvmf_identify_kernel_target 00:25:40.071 ************************************ 00:25:40.330 17:36:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:40.330 17:36:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:40.330 17:36:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:40.330 17:36:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.330 ************************************ 00:25:40.330 START TEST nvmf_auth_host 00:25:40.330 ************************************ 00:25:40.330 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:40.330 * Looking for test storage... 
00:25:40.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:40.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.331 --rc genhtml_branch_coverage=1 00:25:40.331 --rc genhtml_function_coverage=1 00:25:40.331 --rc genhtml_legend=1 00:25:40.331 --rc geninfo_all_blocks=1 00:25:40.331 --rc geninfo_unexecuted_blocks=1 00:25:40.331 00:25:40.331 ' 00:25:40.331 17:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:40.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.331 --rc genhtml_branch_coverage=1 00:25:40.331 --rc genhtml_function_coverage=1 00:25:40.331 --rc genhtml_legend=1 00:25:40.331 --rc geninfo_all_blocks=1 00:25:40.331 --rc geninfo_unexecuted_blocks=1 00:25:40.331 00:25:40.331 ' 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:40.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.331 --rc genhtml_branch_coverage=1 00:25:40.331 --rc genhtml_function_coverage=1 00:25:40.331 --rc genhtml_legend=1 00:25:40.331 --rc geninfo_all_blocks=1 00:25:40.331 --rc geninfo_unexecuted_blocks=1 00:25:40.331 00:25:40.331 ' 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:40.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.331 --rc genhtml_branch_coverage=1 00:25:40.331 --rc genhtml_function_coverage=1 00:25:40.331 --rc genhtml_legend=1 00:25:40.331 --rc geninfo_all_blocks=1 00:25:40.331 --rc geninfo_unexecuted_blocks=1 00:25:40.331 00:25:40.331 ' 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
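The cmp_versions walk above splits dotted versions on "." and "-" and compares component-by-component, here deciding that lcov 1.15 sorts before 2. A standalone sketch of the same idea (the function name ver_lt is mine; scripts/common.sh implements this as cmp_versions, and this sketch handles numeric components only):

```shell
#!/usr/bin/env bash
# ver_lt A B: succeed (exit 0) when dotted version A sorts strictly
# before B. Components are compared numerically left to right; a
# missing component counts as 0, as in the cmp_versions walk above.
ver_lt() {
    local IFS=.- i a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```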
00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.331 17:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:40.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:40.331 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:40.332 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:40.332 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:40.332 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:40.332 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.332 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:40.332 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:40.332 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:40.332 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.332 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.332 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.590 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:40.590 17:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:40.590 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:40.590 17:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:47.166 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:47.166 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
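The loop above maps each matched PCI NIC to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/*, which is how the harness discovers cvl_0_0 and cvl_0_1 under the two 0x159b ports. A hedged sketch of that lookup (the function name and the SYSFS_ROOT override, added so the logic can be exercised without real hardware, are my own):

```shell
#!/usr/bin/env bash
# List kernel net-device names bound to a PCI address via sysfs.
# SYSFS_ROOT defaults to /sys; override it to test against a mock tree.
pci_net_devs() {
    local pci=$1 dev root=${SYSFS_ROOT:-/sys}
    for dev in "$root/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue    # glob unmatched: no net driver bound
        echo "${dev##*/}"            # strip the sysfs path, keep e.g. cvl_0_0
    done
}

pci_net_devs 0000:af:00.0
```

On a machine without such a NIC the function simply prints nothing, which is why the harness counts net_devs afterwards before proceeding.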
00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:47.166 Found net devices under 0000:af:00.0: cvl_0_0 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:47.166 Found net devices under 0000:af:00.1: cvl_0_1 00:25:47.166 17:36:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:47.166 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:47.167 17:36:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:47.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:25:47.167 00:25:47.167 --- 10.0.0.2 ping statistics --- 00:25:47.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.167 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:47.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:47.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:25:47.167 00:25:47.167 --- 10.0.0.1 ping statistics --- 00:25:47.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.167 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2033073 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2033073 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2033073 ']' 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:47.167 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:47.167 17:36:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ae9a58ae6728bf1690f65e43edcc45e9 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.m0c 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ae9a58ae6728bf1690f65e43edcc45e9 0 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ae9a58ae6728bf1690f65e43edcc45e9 0 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ae9a58ae6728bf1690f65e43edcc45e9 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.m0c 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.m0c 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.m0c 
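The `gen_dhchap_key` / `format_dhchap_key` / inline `python -` sequence above draws N random bytes with `xxd -p -c0 -l N /dev/urandom` and wraps the hex secret in the DHHC-1 representation used for NVMe in-band authentication. A minimal standalone sketch of that step (assumptions: the two-hex-digit digest field and the little-endian CRC32 trailer follow the usual reading of the DHHC-1 secret format; SPDK's actual helper lives in `nvmf/common.sh` and may differ in detail):

```python
import base64
import os
import zlib


def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Wrap a raw hex secret as DHHC-1:<digest>:<base64>: .

    digest: 0 = null, 1 = sha256, 2 = sha384, 3 = sha512, matching the
    `digests` table in the log above. Assumption: the base64 payload is
    key || crc32(key), with the CRC stored as 4 little-endian bytes.
    """
    key = bytes.fromhex(hex_key)
    crc = zlib.crc32(key).to_bytes(4, "little")
    return "DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode())


def gen_dhchap_key(digest: int, key_len: int) -> str:
    """Mirror gen_dhchap_key: `xxd -l N` reads N random bytes and prints
    2*N hex chars, so a key_len-character hex key needs key_len/2 bytes."""
    hex_key = os.urandom(key_len // 2).hex()
    return format_dhchap_key(hex_key, digest)
```

A quick self-check is to decode the base64 payload and re-verify the CRC trailer against the leading key bytes.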
00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3befd607bb616d099d6b6f2b4a4cc346d8650cc300542fe37571d7a3ac77a96c 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DP9 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3befd607bb616d099d6b6f2b4a4cc346d8650cc300542fe37571d7a3ac77a96c 3 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3befd607bb616d099d6b6f2b4a4cc346d8650cc300542fe37571d7a3ac77a96c 3 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3befd607bb616d099d6b6f2b4a4cc346d8650cc300542fe37571d7a3ac77a96c 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DP9 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DP9 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.DP9 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2839098d3b4b546104e1b8a786503bf215e4322c510c69ea 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.LOB 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2839098d3b4b546104e1b8a786503bf215e4322c510c69ea 0 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2839098d3b4b546104e1b8a786503bf215e4322c510c69ea 0 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2839098d3b4b546104e1b8a786503bf215e4322c510c69ea 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.LOB 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.LOB 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.LOB 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:47.167 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0e5d1e2772f4b013adca105fa9a4aaca40d5116edfb702be 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Aax 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0e5d1e2772f4b013adca105fa9a4aaca40d5116edfb702be 2 00:25:47.168 17:36:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0e5d1e2772f4b013adca105fa9a4aaca40d5116edfb702be 2 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0e5d1e2772f4b013adca105fa9a4aaca40d5116edfb702be 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Aax 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Aax 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Aax 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=93c116254cadbf6d351a235bbffb2313 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.hkT 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 93c116254cadbf6d351a235bbffb2313 1 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 93c116254cadbf6d351a235bbffb2313 1 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=93c116254cadbf6d351a235bbffb2313 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.hkT 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.hkT 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.hkT 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=89d07db23f9e76f87496a7109987c120 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZfP 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 89d07db23f9e76f87496a7109987c120 1 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 89d07db23f9e76f87496a7109987c120 1 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=89d07db23f9e76f87496a7109987c120 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZfP 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZfP 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ZfP 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:47.168 17:36:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=576da8a12479d13923659be745a12e333546b0385d3df36f 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.nsA 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 576da8a12479d13923659be745a12e333546b0385d3df36f 2 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 576da8a12479d13923659be745a12e333546b0385d3df36f 2 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=576da8a12479d13923659be745a12e333546b0385d3df36f 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.nsA 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.nsA 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.nsA 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=493abc801a55c97eaf3fde4861d90b40 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kHf 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 493abc801a55c97eaf3fde4861d90b40 0 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 493abc801a55c97eaf3fde4861d90b40 0 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=493abc801a55c97eaf3fde4861d90b40 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kHf 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kHf 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.kHf 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=50685daa1c4590d66a5914b34175a4535491ede938382654ed5babee11544bd7 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ELe 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 50685daa1c4590d66a5914b34175a4535491ede938382654ed5babee11544bd7 3 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 50685daa1c4590d66a5914b34175a4535491ede938382654ed5babee11544bd7 3 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=50685daa1c4590d66a5914b34175a4535491ede938382654ed5babee11544bd7 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:47.168 17:36:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:47.168 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ELe 00:25:47.169 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ELe 00:25:47.169 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ELe 00:25:47.169 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:47.169 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2033073 00:25:47.169 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2033073 ']' 00:25:47.169 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.169 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.169 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
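The loop that follows (host/auth.sh@80-82) registers each generated file with the running target: key index i becomes keyring entry `key<i>`, and its controller counterpart, when non-empty, becomes `ckey<i>` — which is why `ckeys[4]=` (empty) produces no ckey4 call. A hypothetical dry-run sketch of that pairing (the `keyring_file_add_key <name> <path>` RPC shape is taken from the log itself; the helper below only builds the command lines and does not talk to an SPDK instance):

```python
from typing import Dict, List


def keyring_add_commands(keys: Dict[int, str], ckeys: Dict[int, str]) -> List[str]:
    """Build the keyring_file_add_key RPC invocations the bash loop issues:
    one per host key, plus one per controller key where a ckey file exists."""
    cmds = []
    for i, path in sorted(keys.items()):
        cmds.append(f"keyring_file_add_key key{i} {path}")
        ckey = ckeys.get(i, "")
        if ckey:  # skipped when ckeys[i] is empty, as for index 4 in the log
            cmds.append(f"keyring_file_add_key ckey{i} {ckey}")
    return cmds
```

Fed the file names from the log, this reproduces the key0/ckey0 … key4 ordering seen in the RPC calls below.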
00:25:47.169 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.169 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.428 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.428 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:47.428 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:47.428 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.m0c 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.DP9 ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DP9 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.LOB 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Aax ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Aax 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.hkT 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ZfP ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZfP 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.nsA 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.kHf ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.kHf 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ELe 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.429 17:36:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:47.429 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:49.964 Waiting for block devices as requested 00:25:49.964 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:50.222 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:50.222 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:50.222 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:50.481 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:50.481 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:50.481 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:50.481 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:50.740 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:50.740 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:50.740 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:50.999 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:50.999 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:50.999 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:50.999 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:51.258 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:51.258 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:51.831 No valid GPT data, bailing 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:51.831 00:25:51.831 Discovery Log Number of Records 2, Generation counter 2 00:25:51.831 =====Discovery Log Entry 0====== 00:25:51.831 trtype: tcp 00:25:51.831 adrfam: ipv4 00:25:51.831 subtype: current discovery subsystem 00:25:51.831 treq: not specified, sq flow control disable supported 00:25:51.831 portid: 1 00:25:51.831 trsvcid: 4420 00:25:51.831 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:51.831 traddr: 10.0.0.1 00:25:51.831 eflags: none 00:25:51.831 sectype: none 00:25:51.831 =====Discovery Log Entry 1====== 00:25:51.831 trtype: tcp 00:25:51.831 adrfam: ipv4 00:25:51.831 subtype: nvme subsystem 00:25:51.831 treq: not specified, sq flow control disable supported 00:25:51.831 portid: 1 00:25:51.831 trsvcid: 4420 00:25:51.831 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:51.831 traddr: 10.0.0.1 00:25:51.831 eflags: none 00:25:51.831 sectype: none 00:25:51.831 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.091 nvme0n1 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.091 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.092 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.092 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:52.092 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.092 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:52.092 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.092 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.092 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.351 nvme0n1 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.351 17:36:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.351 
17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.351 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.610 nvme0n1 00:25:52.610 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.610 17:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.610 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.611 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:52.870 nvme0n1 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]] 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.870 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.129 nvme0n1 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.129 17:36:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.129 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.388 nvme0n1 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.389 
17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:25:53.389 
17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.389 17:36:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.389 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.648 nvme0n1 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.648 17:36:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.648 17:36:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:53.648 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.648 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.907 nvme0n1 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.907 17:36:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.907 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.908 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.908 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.908 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.908 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.908 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.908 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.908 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.908 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.908 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.167 nvme0n1 00:25:54.167 17:36:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:25:54.167 17:36:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]] 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.167 nvme0n1 00:25:54.167 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.426 17:36:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.426 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.427 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.427 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.427 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.427 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.427 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.427 nvme0n1 00:25:54.427 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.427 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.427 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.427 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.427 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.686 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.686 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.686 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.686 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.686 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.686 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.945 nvme0n1 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:54.945 
17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.945 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.204 nvme0n1 00:25:55.204 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.204 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.204 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.204 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.204 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.204 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.204 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.204 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.204 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.204 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.204 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.205 17:36:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.205 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.464 nvme0n1 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.464 17:36:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:25:55.464 
17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]] 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.464 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.723 17:36:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.723 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.723 nvme0n1 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.982 17:36:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.982 
17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.982 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.241 nvme0n1 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:25:56.241 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.242 17:36:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.242 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.809 nvme0n1 00:25:56.809 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.809 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.809 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.809 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.809 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.809 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.809 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.809 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.810 17:36:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.810 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.069 nvme0n1 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.069 17:36:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.069 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.637 nvme0n1 00:25:57.637 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.637 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.637 17:36:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.637 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.637 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.637 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.637 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.637 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.637 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.637 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.637 17:36:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]] 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.637 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.638 17:36:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.638 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.896 nvme0n1 00:25:57.896 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.896 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.896 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.896 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.896 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.896 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.155 17:36:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.155 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.156 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.156 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.156 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.156 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.156 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.156 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.156 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.156 17:36:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.156 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.156 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:58.156 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.156 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.415 nvme0n1 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:58.415 17:36:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.415 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.982 nvme0n1 00:25:58.982 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.982 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.982 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.982 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.982 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.240 17:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.240 17:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.240 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.240 17:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.807 nvme0n1 00:25:59.807 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.807 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.807 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.807 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.807 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.807 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.807 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.807 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.807 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.807 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.808 17:36:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.808 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.375 nvme0n1 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]] 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.375 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.942 nvme0n1 00:26:00.942 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.942 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.942 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.942 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.942 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.942 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.201 
17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.201 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.767 nvme0n1 00:26:01.767 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.767 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.767 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.767 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.767 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.767 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.767 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.768 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.026 nvme0n1 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.026 
17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.026 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.027 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.027 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.027 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.027 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.027 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.027 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.027 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.027 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.027 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.285 nvme0n1 
00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:02.285 17:36:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.285 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.286 
17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.286 nvme0n1 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.286 17:36:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.286 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.544 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]] 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.545 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.545 nvme0n1 00:26:02.545 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.545 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.545 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.545 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.545 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.545 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.545 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.545 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:02.545 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.545 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.545 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.804 17:36:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.804 nvme0n1 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:02.804 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.805 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.064 nvme0n1 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.064 
17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.064 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.323 nvme0n1 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 
00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.323 17:36:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.323 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.582 nvme0n1 00:26:03.582 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.582 17:36:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]] 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:03.582 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.583 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.842 nvme0n1 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.842 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.101 nvme0n1 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.101 17:36:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.101 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.102 17:36:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.102 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.102 17:36:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.361 nvme0n1 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.361 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.620 
17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.620 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.879 nvme0n1 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.879 17:36:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.879 17:36:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.879 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.138 nvme0n1 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]] 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.138 17:36:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.138 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.139 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.139 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.139 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.397 nvme0n1 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.398 17:36:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.398 17:36:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.398 
17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.398 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.657 nvme0n1 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.657 17:36:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.657 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.916 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.175 nvme0n1 
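Each cycle in this log builds the attach command's controller-key arguments with the expansion at host/auth.sh@58, `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})`: the `--dhchap-ctrlr-key` pair is emitted only when a controller key exists for that keyid (in this run, keyid 4 has none, so the final attach at auth.sh@61 passes only `--dhchap-key key4`). A minimal standalone sketch of that expansion, with stand-in key values:

```shell
#!/usr/bin/env bash
# Demonstrates the ${ckeys[keyid]:+...} pattern from host/auth.sh@58.
# Key values below are stand-ins, not the real DHHC-1 secrets from the log.
ckeys=("DHHC-1:03:c0" "DHHC-1:02:c1" "DHHC-1:01:c2" "DHHC-1:00:c3" "")

keyid=1
# Non-empty ckeys[1]: the :+ expansion yields both argument words.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"

keyid=4
# Empty ckeys[4]: the :+ expansion yields nothing, so the array is empty
# and the attach command gets no --dhchap-ctrlr-key flag at all.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "keyid=$keyid -> ${#ckey[@]} extra args"
```

Because the array expansion is deliberately unquoted, a populated entry splits into exactly two words (`--dhchap-ctrlr-key` and `ckey1`), while an empty entry contributes zero words rather than an empty-string argument.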
00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:06.175 17:36:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.175 
17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.175 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.743 nvme0n1 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.743 17:36:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:06.743 17:36:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.743 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.002 nvme0n1 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:07.002 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:07.003 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]] 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:07.261 17:36:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.261 17:36:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.261 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.262 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:07.262 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.262 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.521 nvme0n1 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.521 17:36:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.521 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:07.521 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.089 nvme0n1 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.089 17:36:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.089 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.661 nvme0n1 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.661 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.662 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.231 nvme0n1 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.231 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.232 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.232 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.232 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.798 nvme0n1 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]] 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.798 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.799 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.799 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.799 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.799 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.799 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.799 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.799 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.799 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.799 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:09.799 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.057 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.057 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:10.057 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.057 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.648 nvme0n1 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.648 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:11.286 nvme0n1 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:11.286 17:36:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.286 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.287 nvme0n1 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.287 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.548 nvme0n1
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.548 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:11.548 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:11.548 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:11.548 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK:
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna:
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK:
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]]
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna:
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:11.549 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.807 nvme0n1
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==:
00:26:11.807 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ:
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==:
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]]
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ:
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:11.808 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.066 nvme0n1
00:26:12.066 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.066 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:12.066 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:12.066 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=:
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=:
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.067 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.326 nvme0n1
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ:
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=:
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ:
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]]
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=:
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.326 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.585 nvme0n1
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==:
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==:
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==:
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]]
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==:
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.585 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.844 nvme0n1
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK:
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna:
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK:
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]]
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna:
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:12.844 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:12.845 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:12.845 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:12.845 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:12.845 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:12.845 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:12.845 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.845 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.104 nvme0n1
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==:
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ:
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==:
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]]
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ:
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.104 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.364 nvme0n1
00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:26:13.364 17:36:39
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.364 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.623 nvme0n1 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.623 
17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.623 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.623 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.623 
17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.623 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.623 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.623 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.623 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.623 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.623 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.623 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.623 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.624 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.624 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.624 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.624 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.624 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.883 nvme0n1 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.883 17:36:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.883 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.142 nvme0n1 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.142 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.143 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.143 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.143 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.143 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.143 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.143 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.143 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.143 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.143 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.143 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.143 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.143 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.401 nvme0n1 00:26:14.401 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.401 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.401 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.401 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.401 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.401 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.401 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.401 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.401 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.401 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]] 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.660 17:36:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.660 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.919 nvme0n1 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.919 17:36:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.919 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.920 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.920 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.920 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.920 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.178 nvme0n1 00:26:15.178 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.179 
17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.179 17:36:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.179 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.746 nvme0n1 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.746 17:36:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.746 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.005 nvme0n1 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:16.005 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:16.006 
17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.006 17:36:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.006 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.265 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.265 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.265 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.524 nvme0n1 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.524 17:36:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]] 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.524 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.092 nvme0n1 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.092 17:36:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.092 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.093 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.351 nvme0n1 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.352 
17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ: 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: ]] 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlZmQ2MDdiYjYxNmQwOTlkNmI2ZjJiNGE0Y2MzNDZkODY1MGNjMzAwNTQyZmUzNzU3MWQ3YTNhYzc3YTk2Yw4QX+E=: 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.352 17:36:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.352 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.919 nvme0n1 00:26:17.919 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.919 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.919 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.919 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.919 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.919 17:36:44 
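The `DHHC-1:<t>:<base64>:` strings fed to `nvmet_auth_set_key` above follow the NVMe over Fabrics DH-HMAC-CHAP secret representation: `<t>` names the hash already applied to the secret (`00` = none, `01`/`02`/`03` = SHA-256/384/512), and the base64 payload is the secret followed by a 4-byte CRC-32 trailer. A minimal sketch that unpacks the keyid=0 secret from this log with standard tools (the 32-byte secret length is specific to this particular key; this is an illustration, not part of the test flow):

```shell
#!/bin/sh
# Unpack the keyid=0 DH-HMAC-CHAP key copied verbatim from the log above.
key='DHHC-1:00:YWU5YTU4YWU2NzI4YmYxNjkwZjY1ZTQzZWRjYzQ1ZTkB0grZ:'

# Field 3 is base64(secret || crc32); this key carries a 32-byte secret,
# so keep the first 32 decoded bytes and drop the 4 trailing CRC bytes.
b64=$(printf '%s' "$key" | cut -d: -f3)
secret=$(printf '%s' "$b64" | base64 -d | head -c 32)
echo "$secret"    # ae9a58ae6728bf1690f65e43edcc45e9
```

The same unpacking applies to the `ckey` (controller key) values; only the secret length differs per hash id.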
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:18.177 17:36:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.177 17:36:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.177 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.745 nvme0n1 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.745 17:36:45 
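The `get_main_ns_ip` trace repeated throughout this run picks which environment variable holds the initiator-reachable address for the active transport (`rdma` → `NVMF_FIRST_TARGET_IP`, `tcp` → `NVMF_INITIATOR_IP`) and echoes its value, here 10.0.0.1. A simplified standalone sketch of that lookup (the real helper in nvmf/common.sh uses a bash associative array; the variable value below is a stand-in copied from the log):

```shell
#!/bin/sh
# Sketch of the ip_candidates lookup seen in the nvmf/common.sh traces:
# map the transport to a variable name, then dereference that variable.
NVMF_INITIATOR_IP=10.0.0.1          # stand-in value, matching the log
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    case "$TEST_TRANSPORT" in
        rdma) var=NVMF_FIRST_TARGET_IP ;;
        tcp)  var=NVMF_INITIATOR_IP ;;
        *)    return 1 ;;
    esac
    eval "echo \"\$$var\""          # indirect expansion, POSIX style
}

get_main_ns_ip                       # prints 10.0.0.1
```

The echoed address is what `rpc_cmd bdev_nvme_attach_controller -a …` receives as its target IP in each connect step above.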
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:18.745 17:36:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.745 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.746 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.746 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.746 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.746 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.746 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.746 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.746 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.746 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.746 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.746 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.746 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.314 nvme0n1 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2ZGE4YTEyNDc5ZDEzOTIzNjU5YmU3NDVhMTJlMzMzNTQ2YjAzODVkM2RmMzZmxDmj5A==: 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: ]] 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDkzYWJjODAxYTU1Yzk3ZWFmM2ZkZTQ4NjFkOTBiNDC7B+FZ: 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.314 17:36:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.314 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.882 nvme0n1 00:26:19.882 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.882 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.882 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.882 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.882 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.882 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.882 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.882 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.882 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.882 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA2ODVkYWExYzQ1OTBkNjZhNTkxNGIzNDE3NWE0NTM1NDkxZWRlOTM4MzgyNjU0ZWQ1YmFiZWUxMTU0NGJkN1717tM=: 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.141 
17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.141 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.709 nvme0n1 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.709 request: 00:26:20.709 { 00:26:20.709 "name": "nvme0", 00:26:20.709 "trtype": "tcp", 00:26:20.709 "traddr": "10.0.0.1", 00:26:20.709 "adrfam": "ipv4", 00:26:20.709 "trsvcid": "4420", 00:26:20.709 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:20.709 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:20.709 "prchk_reftag": false, 00:26:20.709 "prchk_guard": false, 00:26:20.709 "hdgst": false, 00:26:20.709 "ddgst": false, 00:26:20.709 "allow_unrecognized_csi": false, 00:26:20.709 "method": "bdev_nvme_attach_controller", 00:26:20.709 "req_id": 1 00:26:20.709 } 00:26:20.709 Got JSON-RPC error 
response 00:26:20.709 response: 00:26:20.709 { 00:26:20.709 "code": -5, 00:26:20.709 "message": "Input/output error" 00:26:20.709 } 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
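The attach attempt above, issued without the required `--dhchap-key`, fails with JSON-RPC error -5 (Input/output error), and the harness wraps it in `NOT rpc_cmd …` so that the failure itself is the passing outcome (the `es=1` / `(( !es == 0 ))` lines in the trace). A simplified sketch of that inversion, not the exact `autotest_common.sh` implementation:

```shell
#!/bin/sh
# Simplified NOT() wrapper: succeed exactly when the wrapped command fails,
# the pattern used above for attach calls that must be rejected.
NOT() {
    if "$@"; then
        return 1    # wrapped command unexpectedly succeeded
    else
        return 0    # wrapped command failed, which is what we wanted
    fi
}

NOT false && echo "failure inverted to success"
```

The real helper additionally distinguishes exit codes above 128 (signals) from ordinary failures, as the `(( es > 128 ))` branch in the trace shows.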
-- # [[ -z tcp ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:20.709 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.710 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:20.710 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.710 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:20.710 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.710 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.968 request: 
00:26:20.968 { 00:26:20.968 "name": "nvme0", 00:26:20.968 "trtype": "tcp", 00:26:20.968 "traddr": "10.0.0.1", 00:26:20.968 "adrfam": "ipv4", 00:26:20.968 "trsvcid": "4420", 00:26:20.968 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:20.968 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:20.968 "prchk_reftag": false, 00:26:20.968 "prchk_guard": false, 00:26:20.968 "hdgst": false, 00:26:20.968 "ddgst": false, 00:26:20.968 "dhchap_key": "key2", 00:26:20.968 "allow_unrecognized_csi": false, 00:26:20.968 "method": "bdev_nvme_attach_controller", 00:26:20.968 "req_id": 1 00:26:20.968 } 00:26:20.968 Got JSON-RPC error response 00:26:20.968 response: 00:26:20.968 { 00:26:20.968 "code": -5, 00:26:20.968 "message": "Input/output error" 00:26:20.968 } 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.968 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.969 17:36:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.969 request: 00:26:20.969 { 00:26:20.969 "name": "nvme0", 00:26:20.969 "trtype": "tcp", 00:26:20.969 "traddr": "10.0.0.1", 00:26:20.969 "adrfam": "ipv4", 00:26:20.969 "trsvcid": "4420", 00:26:20.969 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:20.969 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:20.969 "prchk_reftag": false, 00:26:20.969 "prchk_guard": false, 00:26:20.969 "hdgst": false, 00:26:20.969 "ddgst": false, 00:26:20.969 "dhchap_key": "key1", 00:26:20.969 "dhchap_ctrlr_key": "ckey2", 00:26:20.969 "allow_unrecognized_csi": false, 00:26:20.969 "method": "bdev_nvme_attach_controller", 00:26:20.969 "req_id": 1 00:26:20.969 } 00:26:20.969 Got JSON-RPC error response 00:26:20.969 response: 00:26:20.969 { 00:26:20.969 "code": -5, 00:26:20.969 "message": "Input/output error" 00:26:20.969 } 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.969 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.228 nvme0n1 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:21.228 17:36:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.228 request: 00:26:21.228 { 00:26:21.228 "name": "nvme0", 00:26:21.228 "dhchap_key": "key1", 00:26:21.228 "dhchap_ctrlr_key": "ckey2", 00:26:21.228 "method": "bdev_nvme_set_keys", 00:26:21.228 "req_id": 1 00:26:21.228 } 00:26:21.228 Got JSON-RPC error response 00:26:21.228 
response: 00:26:21.228 { 00:26:21.228 "code": -13, 00:26:21.228 "message": "Permission denied" 00:26:21.228 } 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:21.228 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:22.605 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.605 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:22.605 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.605 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.605 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.605 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:22.605 17:36:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzOTA5OGQzYjRiNTQ2MTA0ZTFiOGE3ODY1MDNiZjIxNWU0MzIyYzUxMGM2OWVhWi6RLw==: 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: ]] 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGU1ZDFlMjc3MmY0YjAxM2FkY2ExMDVmYTlhNGFhY2E0MGQ1MTE2ZWRmYjcwMmJlqJcLzw==: 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.542 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.542 nvme0n1 00:26:23.542 17:36:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNjMTE2MjU0Y2FkYmY2ZDM1MWEyMzViYmZmYjIzMTNWI2tK: 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: ]] 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODlkMDdkYjIzZjllNzZmODc0OTZhNzEwOTk4N2MxMjCVVqna: 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:23.542 17:36:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.542 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.542 request: 00:26:23.542 { 00:26:23.542 "name": "nvme0", 00:26:23.542 "dhchap_key": "key2", 00:26:23.542 "dhchap_ctrlr_key": "ckey1", 00:26:23.542 "method": "bdev_nvme_set_keys", 00:26:23.542 "req_id": 1 00:26:23.801 } 00:26:23.801 Got JSON-RPC error response 00:26:23.801 response: 00:26:23.801 { 00:26:23.801 "code": -13, 00:26:23.801 "message": "Permission denied" 00:26:23.801 } 00:26:23.801 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:23.801 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:23.801 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:23.801 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:23.801 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:23.801 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.801 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:23.801 17:36:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.801 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.801 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.801 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:23.801 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:24.737 rmmod nvme_tcp 
00:26:24.737 rmmod nvme_fabrics 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2033073 ']' 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2033073 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2033073 ']' 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2033073 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.737 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2033073 00:26:24.996 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:24.996 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:24.996 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2033073' 00:26:24.996 killing process with pid 2033073 00:26:24.996 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2033073 00:26:24.996 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2033073 00:26:24.996 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:24.996 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:24.996 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:24.996 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:24.997 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:24.997 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:24.997 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:24.997 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:24.997 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:24.997 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.997 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.997 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.533 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:27.533 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:27.533 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:27.533 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:27.533 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:27.533 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:27.533 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:27.533 17:36:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:27.533 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:27.533 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:27.533 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:27.533 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:27.533 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:30.068 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:30.068 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:31.007 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:31.007 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.m0c /tmp/spdk.key-null.LOB /tmp/spdk.key-sha256.hkT /tmp/spdk.key-sha384.nsA 
/tmp/spdk.key-sha512.ELe /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:31.007 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:33.542 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:33.542 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:33.542 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:33.801 00:26:33.801 real 0m53.542s 00:26:33.801 user 0m48.304s 00:26:33.801 sys 0m12.602s 00:26:33.801 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.801 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.801 ************************************ 00:26:33.801 END TEST nvmf_auth_host 00:26:33.801 ************************************ 00:26:33.801 17:37:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:26:33.801 17:37:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:33.801 17:37:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:33.801 17:37:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.801 17:37:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.801 ************************************ 00:26:33.801 START TEST nvmf_digest 00:26:33.801 ************************************ 00:26:33.801 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:34.061 * Looking for test storage... 00:26:34.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:34.061 17:37:00 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:34.061 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:34.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.062 --rc genhtml_branch_coverage=1 00:26:34.062 --rc genhtml_function_coverage=1 00:26:34.062 --rc genhtml_legend=1 00:26:34.062 --rc geninfo_all_blocks=1 00:26:34.062 --rc geninfo_unexecuted_blocks=1 00:26:34.062 00:26:34.062 ' 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:34.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.062 --rc genhtml_branch_coverage=1 00:26:34.062 --rc genhtml_function_coverage=1 00:26:34.062 --rc genhtml_legend=1 00:26:34.062 --rc geninfo_all_blocks=1 00:26:34.062 --rc geninfo_unexecuted_blocks=1 00:26:34.062 00:26:34.062 ' 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:34.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.062 --rc genhtml_branch_coverage=1 00:26:34.062 --rc genhtml_function_coverage=1 00:26:34.062 --rc genhtml_legend=1 00:26:34.062 --rc geninfo_all_blocks=1 00:26:34.062 --rc geninfo_unexecuted_blocks=1 00:26:34.062 00:26:34.062 ' 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:34.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.062 --rc genhtml_branch_coverage=1 00:26:34.062 --rc genhtml_function_coverage=1 00:26:34.062 --rc genhtml_legend=1 00:26:34.062 --rc geninfo_all_blocks=1 00:26:34.062 --rc geninfo_unexecuted_blocks=1 00:26:34.062 00:26:34.062 ' 00:26:34.062 17:37:00 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.062 
17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:34.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:34.062 17:37:00 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:34.062 17:37:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.641 17:37:06 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:40.641 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:40.642 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:40.642 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:40.642 Found net devices under 0000:af:00.0: cvl_0_0 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:40.642 Found net devices under 0000:af:00.1: cvl_0_1 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:40.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:26:40.642 00:26:40.642 --- 10.0.0.2 ping statistics --- 00:26:40.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.642 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:26:40.642 00:26:40.642 --- 10.0.0.1 ping statistics --- 00:26:40.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.642 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.642 ************************************ 00:26:40.642 START TEST nvmf_digest_clean 00:26:40.642 ************************************ 00:26:40.642 
17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2046777 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2046777 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2046777 ']' 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.642 17:37:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.642 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.642 [2024-12-09 17:37:06.548975] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:26:40.642 [2024-12-09 17:37:06.549015] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.642 [2024-12-09 17:37:06.627202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.642 [2024-12-09 17:37:06.666034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.642 [2024-12-09 17:37:06.666068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.642 [2024-12-09 17:37:06.666075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.643 [2024-12-09 17:37:06.666081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.643 [2024-12-09 17:37:06.666086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:40.643 [2024-12-09 17:37:06.666572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.643 null0 00:26:40.643 [2024-12-09 17:37:06.822205] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.643 [2024-12-09 17:37:06.846384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2046802 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2046802 /var/tmp/bperf.sock 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2046802 ']' 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:40.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.643 17:37:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.643 [2024-12-09 17:37:06.898961] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:26:40.643 [2024-12-09 17:37:06.899002] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046802 ] 00:26:40.643 [2024-12-09 17:37:06.972178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.643 [2024-12-09 17:37:07.010850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.643 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.643 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:40.643 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:40.643 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:40.643 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:40.902 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.902 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:41.160 nvme0n1 00:26:41.160 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:41.160 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:41.418 Running I/O for 2 seconds... 00:26:43.287 25893.00 IOPS, 101.14 MiB/s [2024-12-09T16:37:09.827Z] 25566.00 IOPS, 99.87 MiB/s 00:26:43.287 Latency(us) 00:26:43.287 [2024-12-09T16:37:09.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.287 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:43.287 nvme0n1 : 2.00 25579.66 99.92 0.00 0.00 4999.40 2527.82 17476.27 00:26:43.287 [2024-12-09T16:37:09.827Z] =================================================================================================================== 00:26:43.287 [2024-12-09T16:37:09.827Z] Total : 25579.66 99.92 0.00 0.00 4999.40 2527.82 17476.27 00:26:43.287 { 00:26:43.287 "results": [ 00:26:43.287 { 00:26:43.287 "job": "nvme0n1", 00:26:43.287 "core_mask": "0x2", 00:26:43.287 "workload": "randread", 00:26:43.287 "status": "finished", 00:26:43.287 "queue_depth": 128, 00:26:43.287 "io_size": 4096, 00:26:43.287 "runtime": 2.003936, 00:26:43.287 "iops": 25579.65923063411, 00:26:43.287 "mibps": 99.9205438696645, 00:26:43.287 "io_failed": 0, 00:26:43.287 "io_timeout": 0, 00:26:43.287 "avg_latency_us": 4999.402125634023, 00:26:43.287 "min_latency_us": 2527.8171428571427, 00:26:43.287 "max_latency_us": 17476.266666666666 00:26:43.287 } 00:26:43.287 ], 00:26:43.287 "core_count": 1 00:26:43.287 } 00:26:43.287 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:43.287 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:43.287 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:43.287 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:43.287 | select(.opcode=="crc32c") 00:26:43.287 | "\(.module_name) \(.executed)"' 00:26:43.287 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:43.545 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:43.545 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:43.545 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:43.546 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:43.546 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2046802 00:26:43.546 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2046802 ']' 00:26:43.546 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2046802 00:26:43.546 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:43.546 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:43.546 17:37:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2046802 00:26:43.546 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:43.546 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:43.546 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2046802' 00:26:43.546 killing process with pid 2046802 00:26:43.546 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2046802 00:26:43.546 Received shutdown signal, test time was about 2.000000 seconds 00:26:43.546 00:26:43.546 Latency(us) 00:26:43.546 [2024-12-09T16:37:10.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.546 [2024-12-09T16:37:10.086Z] =================================================================================================================== 00:26:43.546 [2024-12-09T16:37:10.086Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:43.546 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2046802 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2047273 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2047273 /var/tmp/bperf.sock 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2047273 ']' 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:43.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.804 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:43.804 [2024-12-09 17:37:10.231756] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:26:43.804 [2024-12-09 17:37:10.231811] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047273 ] 00:26:43.804 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:43.804 Zero copy mechanism will not be used. 
00:26:43.804 [2024-12-09 17:37:10.291492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.804 [2024-12-09 17:37:10.330456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.063 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.063 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:44.063 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:44.063 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:44.063 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:44.321 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.321 17:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.579 nvme0n1 00:26:44.579 17:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:44.579 17:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:44.837 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:44.837 Zero copy mechanism will not be used. 00:26:44.837 Running I/O for 2 seconds... 
00:26:46.707 6098.00 IOPS, 762.25 MiB/s [2024-12-09T16:37:13.247Z] 6115.00 IOPS, 764.38 MiB/s 00:26:46.707 Latency(us) 00:26:46.707 [2024-12-09T16:37:13.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.707 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:46.707 nvme0n1 : 2.00 6115.24 764.40 0.00 0.00 2613.83 635.86 10423.34 00:26:46.707 [2024-12-09T16:37:13.247Z] =================================================================================================================== 00:26:46.707 [2024-12-09T16:37:13.247Z] Total : 6115.24 764.40 0.00 0.00 2613.83 635.86 10423.34 00:26:46.707 { 00:26:46.707 "results": [ 00:26:46.707 { 00:26:46.707 "job": "nvme0n1", 00:26:46.707 "core_mask": "0x2", 00:26:46.707 "workload": "randread", 00:26:46.707 "status": "finished", 00:26:46.707 "queue_depth": 16, 00:26:46.707 "io_size": 131072, 00:26:46.707 "runtime": 2.002538, 00:26:46.707 "iops": 6115.239760743616, 00:26:46.707 "mibps": 764.404970092952, 00:26:46.707 "io_failed": 0, 00:26:46.707 "io_timeout": 0, 00:26:46.707 "avg_latency_us": 2613.8311359977606, 00:26:46.707 "min_latency_us": 635.8552380952381, 00:26:46.707 "max_latency_us": 10423.344761904762 00:26:46.707 } 00:26:46.707 ], 00:26:46.707 "core_count": 1 00:26:46.707 } 00:26:46.707 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:46.708 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:46.708 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:46.708 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:46.708 | select(.opcode=="crc32c") 00:26:46.708 | "\(.module_name) \(.executed)"' 00:26:46.708 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2047273 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2047273 ']' 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2047273 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2047273 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2047273' 00:26:46.967 killing process with pid 2047273 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2047273 00:26:46.967 Received shutdown signal, test time was about 2.000000 seconds 
00:26:46.967 00:26:46.967 Latency(us) 00:26:46.967 [2024-12-09T16:37:13.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.967 [2024-12-09T16:37:13.507Z] =================================================================================================================== 00:26:46.967 [2024-12-09T16:37:13.507Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:46.967 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2047273 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2047936 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2047936 /var/tmp/bperf.sock 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2047936 ']' 00:26:47.226 17:37:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:47.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.226 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:47.226 [2024-12-09 17:37:13.628955] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:26:47.226 [2024-12-09 17:37:13.629004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047936 ] 00:26:47.226 [2024-12-09 17:37:13.700633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.226 [2024-12-09 17:37:13.736251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.484 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:47.484 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:47.484 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:47.484 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:47.484 17:37:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:47.743 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.743 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.001 nvme0n1 00:26:48.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:48.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:48.001 Running I/O for 2 seconds... 
00:26:50.313 28675.00 IOPS, 112.01 MiB/s [2024-12-09T16:37:16.853Z] 28761.00 IOPS, 112.35 MiB/s 00:26:50.313 Latency(us) 00:26:50.313 [2024-12-09T16:37:16.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.313 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:50.313 nvme0n1 : 2.00 28750.98 112.31 0.00 0.00 4446.28 1786.64 8113.98 00:26:50.313 [2024-12-09T16:37:16.853Z] =================================================================================================================== 00:26:50.313 [2024-12-09T16:37:16.853Z] Total : 28750.98 112.31 0.00 0.00 4446.28 1786.64 8113.98 00:26:50.313 { 00:26:50.313 "results": [ 00:26:50.313 { 00:26:50.313 "job": "nvme0n1", 00:26:50.313 "core_mask": "0x2", 00:26:50.313 "workload": "randwrite", 00:26:50.313 "status": "finished", 00:26:50.313 "queue_depth": 128, 00:26:50.313 "io_size": 4096, 00:26:50.313 "runtime": 2.002332, 00:26:50.313 "iops": 28750.976361562418, 00:26:50.313 "mibps": 112.3085014123532, 00:26:50.313 "io_failed": 0, 00:26:50.313 "io_timeout": 0, 00:26:50.313 "avg_latency_us": 4446.281929312155, 00:26:50.313 "min_latency_us": 1786.6361904761904, 00:26:50.313 "max_latency_us": 8113.980952380953 00:26:50.313 } 00:26:50.313 ], 00:26:50.313 "core_count": 1 00:26:50.313 } 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:50.313 | select(.opcode=="crc32c") 00:26:50.313 | "\(.module_name) \(.executed)"' 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2047936 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2047936 ']' 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2047936 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2047936 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2047936' 00:26:50.313 killing process with pid 2047936 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2047936 00:26:50.313 Received shutdown signal, test time was about 2.000000 seconds 
00:26:50.313 00:26:50.313 Latency(us) 00:26:50.313 [2024-12-09T16:37:16.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.313 [2024-12-09T16:37:16.853Z] =================================================================================================================== 00:26:50.313 [2024-12-09T16:37:16.853Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:50.313 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2047936 00:26:50.572 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:50.572 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:50.572 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:50.572 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:50.572 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:50.572 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:50.572 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:50.572 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2048398 00:26:50.573 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2048398 /var/tmp/bperf.sock 00:26:50.573 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:50.573 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2048398 ']' 00:26:50.573 17:37:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:50.573 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.573 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:50.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:50.573 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.573 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:50.573 [2024-12-09 17:37:16.912197] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:26:50.573 [2024-12-09 17:37:16.912243] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048398 ] 00:26:50.573 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:50.573 Zero copy mechanism will not be used. 
00:26:50.573 [2024-12-09 17:37:16.984276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.573 [2024-12-09 17:37:17.024361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.573 17:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:50.573 17:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:50.573 17:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:50.573 17:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:50.573 17:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:50.832 17:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.832 17:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:51.090 nvme0n1 00:26:51.090 17:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:51.090 17:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:51.349 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:51.349 Zero copy mechanism will not be used. 00:26:51.349 Running I/O for 2 seconds... 
00:26:53.222 6996.00 IOPS, 874.50 MiB/s [2024-12-09T16:37:19.762Z] 6899.00 IOPS, 862.38 MiB/s 00:26:53.222 Latency(us) 00:26:53.222 [2024-12-09T16:37:19.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.222 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:53.222 nvme0n1 : 2.00 6897.19 862.15 0.00 0.00 2315.76 1630.60 7708.28 00:26:53.222 [2024-12-09T16:37:19.762Z] =================================================================================================================== 00:26:53.222 [2024-12-09T16:37:19.762Z] Total : 6897.19 862.15 0.00 0.00 2315.76 1630.60 7708.28 00:26:53.222 { 00:26:53.222 "results": [ 00:26:53.222 { 00:26:53.222 "job": "nvme0n1", 00:26:53.222 "core_mask": "0x2", 00:26:53.222 "workload": "randwrite", 00:26:53.222 "status": "finished", 00:26:53.222 "queue_depth": 16, 00:26:53.222 "io_size": 131072, 00:26:53.222 "runtime": 2.003424, 00:26:53.222 "iops": 6897.192007283531, 00:26:53.222 "mibps": 862.1490009104414, 00:26:53.222 "io_failed": 0, 00:26:53.222 "io_timeout": 0, 00:26:53.222 "avg_latency_us": 2315.7617892466, 00:26:53.222 "min_latency_us": 1630.5980952380953, 00:26:53.222 "max_latency_us": 7708.281904761905 00:26:53.222 } 00:26:53.222 ], 00:26:53.222 "core_count": 1 00:26:53.222 } 00:26:53.222 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:53.222 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:53.222 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:53.222 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:53.222 | select(.opcode=="crc32c") 00:26:53.222 | "\(.module_name) \(.executed)"' 00:26:53.222 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2048398 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2048398 ']' 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2048398 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2048398 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2048398' 00:26:53.481 killing process with pid 2048398 00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2048398 00:26:53.481 Received shutdown signal, test time was about 2.000000 seconds 
00:26:53.481
00:26:53.481 Latency(us)
00:26:53.481 [2024-12-09T16:37:20.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:53.481 [2024-12-09T16:37:20.021Z] ===================================================================================================================
00:26:53.481 [2024-12-09T16:37:20.021Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:53.481 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2048398
00:26:53.740 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2046777
00:26:53.740 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2046777 ']'
00:26:53.740 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2046777
00:26:53.740 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:26:53.740 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:53.740 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2046777
00:26:53.740 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:53.740 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:53.740 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2046777'
00:26:53.740 killing process with pid 2046777
17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2046777
17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2046777
00:26:53.999
00:26:53.999
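The two killprocess traces above follow a recognizable guard sequence: reject an empty pid (@954), probe liveness with `kill -0` (@958), look up the process name on Linux (@959-960), refuse to signal `sudo` itself (@964), then announce and kill (@972-973). A minimal standalone sketch of that shape, assuming plain SIGTERM as traced; this is an illustration of the pattern, not SPDK's actual `autotest_common.sh` implementation:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess() guard sequence seen in the trace above (assumed shape).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                       # @954: reject an empty pid
    kill -0 "$pid" 2>/dev/null || return 1          # @958: is the process still alive?
    [ "$(uname)" = Linux ] || return 1              # @959: comm lookup below is Linux-specific
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid") # @960: resolve the process name
    [ "$process_name" = sudo ] && return 1          # @964: never signal sudo itself
    echo "killing process with pid $pid"            # @972: announce, as in the log
    kill "$pid"                                     # @973: plain SIGTERM, as traced
}

# usage: terminate a background job through the same guards
sleep 60 & pid=$!
killprocess "$pid"
wait "$pid" 2>/dev/null || true
```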
real 0m13.874s
00:26:53.999 user 0m26.568s
00:26:53.999 sys 0m4.633s
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:26:53.999 ************************************
00:26:53.999 END TEST nvmf_digest_clean
00:26:53.999 ************************************
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:26:53.999 ************************************
00:26:53.999 START TEST nvmf_digest_error
00:26:53.999 ************************************
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2049087
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:26:53.999
17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2049087
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2049087 ']'
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:53.999 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:53.999 [2024-12-09 17:37:20.486934] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
00:26:54.258 [2024-12-09 17:37:20.486975] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:54.258 [2024-12-09 17:37:20.563581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:54.258 [2024-12-09 17:37:20.603213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:54.258 [2024-12-09 17:37:20.603248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:54.258 [2024-12-09 17:37:20.603256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:54.258 [2024-12-09 17:37:20.603263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:54.258 [2024-12-09 17:37:20.603268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:54.258 [2024-12-09 17:37:20.603756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:54.258 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:54.258 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:54.258 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:54.258 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:54.258 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:54.258 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:54.258 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:26:54.258 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:54.258 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:54.258 [2024-12-09 17:37:20.688249] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:26:54.258 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:54.258 17:37:20
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:26:54.258 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:26:54.258 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:54.258 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:54.258 null0
00:26:54.258 [2024-12-09 17:37:20.784498] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:54.516 [2024-12-09 17:37:20.808676] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:54.516 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:54.516 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:26:54.516 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:54.516 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:54.516 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:54.517 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:54.517 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2049119
00:26:54.517 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2049119 /var/tmp/bperf.sock
00:26:54.517 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:26:54.517 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2049119 ']'
00:26:54.517 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:54.517 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:54.517 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:54.517 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:54.517 17:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:54.517 [2024-12-09 17:37:20.862070] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
00:26:54.517 [2024-12-09 17:37:20.862111] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049119 ]
00:26:54.517 [2024-12-09 17:37:20.936701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:54.517 [2024-12-09 17:37:20.980416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:54.775 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:54.775 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:54.775 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:54.775 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:54.775 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:54.775 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:54.775 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:54.776 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:54.776 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:54.776 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:55.034 nvme0n1
00:26:55.034 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:55.034 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:55.034 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:55.034 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:55.034 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:55.034 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:55.294 Running I/O for 2 seconds... 00:26:55.294 [2024-12-09 17:37:21.660337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.660373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.660384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.671429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.671457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.671466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.680216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.680241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.680250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.690281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.690304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12040 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.690312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.702040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.702062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.702071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.712084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.712107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.712120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.721702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.721724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.721733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.733794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.733817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.733826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.745307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.745329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.745337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.754563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.754585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.754593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.762806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.762828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.762836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.772394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.772417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.772426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.782379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.782400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.782408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.792762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.792783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.792790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.803710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.803735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.803743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.813063] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.813084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.813092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.294 [2024-12-09 17:37:21.824957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.294 [2024-12-09 17:37:21.824978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-12-09 17:37:21.824986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.553 [2024-12-09 17:37:21.836971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.553 [2024-12-09 17:37:21.836992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.553 [2024-12-09 17:37:21.837000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.553 [2024-12-09 17:37:21.848143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.553 [2024-12-09 17:37:21.848164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.553 [2024-12-09 17:37:21.848177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:55.553 [2024-12-09 17:37:21.859171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.553 [2024-12-09 17:37:21.859191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.553 [2024-12-09 17:37:21.859199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.553 [2024-12-09 17:37:21.872368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.553 [2024-12-09 17:37:21.872390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.553 [2024-12-09 17:37:21.872398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.553 [2024-12-09 17:37:21.883101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.553 [2024-12-09 17:37:21.883121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.553 [2024-12-09 17:37:21.883129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.553 [2024-12-09 17:37:21.891587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.553 [2024-12-09 17:37:21.891608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.553 [2024-12-09 17:37:21.891616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.553 [2024-12-09 17:37:21.903674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.553 [2024-12-09 17:37:21.903696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.553 [2024-12-09 17:37:21.903704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.553 [2024-12-09 17:37:21.914683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.553 [2024-12-09 17:37:21.914704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:21.914712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:21.923887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:21.923908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:21.923917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:21.936230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:21.936253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 
17:37:21.936261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:21.944431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:21.944453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:21.944461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:21.956104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:21.956125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:21.956133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:21.968605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:21.968626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:21.968634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:21.977726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:21.977747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21788 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:21.977756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:21.986507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:21.986528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:21.986539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:21.996811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:21.996831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:21.996839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:22.007208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:22.007230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:22.007238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:22.017163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:22.017189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:22.017197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:22.026270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:22.026291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:22.026300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:22.036621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:22.036641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:22.036650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:22.045525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:22.045546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:22.045554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:22.055717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:22.055738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:22.055747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:22.068042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:22.068063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:22.068071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:22.080134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:22.080155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:22.080164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.554 [2024-12-09 17:37:22.088122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.554 [2024-12-09 17:37:22.088143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.554 [2024-12-09 17:37:22.088152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.100119] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.100140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.100149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.111138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.111159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.111172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.119620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.119642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.119650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.132126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.132148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.132156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.144507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.144528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.144536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.155371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.155392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.155400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.165277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.165298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.165310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.174714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.174735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.174744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.183544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.183564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.183573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.196806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.196827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.196835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.204848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.204868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.204876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.215937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.215958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 
17:37:22.215967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.225685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.225705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.225714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.233942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.233963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.233971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.243746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.243766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.243774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.254531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.254555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16613 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.254563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.816 [2024-12-09 17:37:22.263009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.816 [2024-12-09 17:37:22.263030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.816 [2024-12-09 17:37:22.263039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.817 [2024-12-09 17:37:22.272460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.817 [2024-12-09 17:37:22.272481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.817 [2024-12-09 17:37:22.272490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.817 [2024-12-09 17:37:22.281514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.817 [2024-12-09 17:37:22.281535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.817 [2024-12-09 17:37:22.281543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.817 [2024-12-09 17:37:22.291397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.817 [2024-12-09 17:37:22.291418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.817 [2024-12-09 17:37:22.291426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.817 [2024-12-09 17:37:22.302427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.817 [2024-12-09 17:37:22.302448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.817 [2024-12-09 17:37:22.302456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.817 [2024-12-09 17:37:22.315563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.817 [2024-12-09 17:37:22.315583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.817 [2024-12-09 17:37:22.315592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.817 [2024-12-09 17:37:22.323661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.817 [2024-12-09 17:37:22.323681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.817 [2024-12-09 17:37:22.323690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.817 [2024-12-09 17:37:22.334704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14a3590) 00:26:55.817 [2024-12-09 17:37:22.334731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.817 [2024-12-09 17:37:22.334740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.817 [2024-12-09 17:37:22.347121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:55.817 [2024-12-09 17:37:22.347141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.817 [2024-12-09 17:37:22.347149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.358700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.358724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.358733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.367610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.367632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.367641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.378118] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.378140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.378149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.387447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.387468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.387476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.397217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.397237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.397246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.406896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.406917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.406925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.415279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.415299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.415307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.425566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.425586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.425598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.436195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.436217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.436225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.445606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.445626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.445635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.454578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.454598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.454606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.463676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.463696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.463705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.475372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.475394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.475402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.486364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.486384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 
17:37:22.486393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.495891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.495911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.495919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.504324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.504345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.504353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.516058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.516078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.516087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.526308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.526329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10511 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.526337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.534391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.534411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.534419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.546596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.108 [2024-12-09 17:37:22.546615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.108 [2024-12-09 17:37:22.546624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.108 [2024-12-09 17:37:22.559089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.109 [2024-12-09 17:37:22.559110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.109 [2024-12-09 17:37:22.559118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.109 [2024-12-09 17:37:22.571140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.109 [2024-12-09 17:37:22.571160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.109 [2024-12-09 17:37:22.571173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.109 [2024-12-09 17:37:22.582186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.109 [2024-12-09 17:37:22.582206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.109 [2024-12-09 17:37:22.582215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.109 [2024-12-09 17:37:22.595773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.109 [2024-12-09 17:37:22.595795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.109 [2024-12-09 17:37:22.595803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.109 [2024-12-09 17:37:22.604571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.109 [2024-12-09 17:37:22.604591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.109 [2024-12-09 17:37:22.604603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.109 [2024-12-09 17:37:22.615449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14a3590) 00:26:56.109 [2024-12-09 17:37:22.615471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.109 [2024-12-09 17:37:22.615480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.109 [2024-12-09 17:37:22.624836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.109 [2024-12-09 17:37:22.624858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.109 [2024-12-09 17:37:22.624866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.109 [2024-12-09 17:37:22.633502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.109 [2024-12-09 17:37:22.633524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.109 [2024-12-09 17:37:22.633532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.473 [2024-12-09 17:37:22.645486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.473 [2024-12-09 17:37:22.645509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.473 [2024-12-09 17:37:22.645517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.473 24577.00 IOPS, 96.00 MiB/s [2024-12-09T16:37:23.013Z] 
[2024-12-09 17:37:22.655065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.655087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.655096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.666041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.666062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.666071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.676591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.676613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.676622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.684819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.684839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.684848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.694805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.694831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.694839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.705791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.705813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.705821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.715509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.715531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.715539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.723312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.723333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.723342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.733693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.733713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.733721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.745979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.745999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.746007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.755247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.755269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.755277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.765884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.765906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.765914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.775495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.775516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.775524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.783938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.783960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.783968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.795268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.795290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.795299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.804653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.804675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.804683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.813965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.813986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.813994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.823306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.823328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.823336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.832625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.832646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.832654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.841932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.841953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.841962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.851769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.851790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.851798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.861198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.861220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.861231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.871038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.871060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.871068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.879764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.879785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.879793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.890028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.890050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.890058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.899754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.899776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.899784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.908132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.908153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.908161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.920679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.920701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.920709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.928991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.929012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.929021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.938990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.939012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.939021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.949861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.949883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.949892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.473 [2024-12-09 17:37:22.960618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.473 [2024-12-09 17:37:22.960639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.473 [2024-12-09 17:37:22.960648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:22.968981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:22.969006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:22.969014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:22.980518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:22.980539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:22.980548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:22.988565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:22.988586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:22.988594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:22.999886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:22.999907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:22.999916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.010500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.010521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.010529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.018227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.018247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.018255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.029280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.029301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.029313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.039629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.039651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.039659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.049459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.049480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.049488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.057803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.057824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.057832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.068334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.068354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.068362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.079590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.079611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.079619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.087411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.087432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.087441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.097022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.097043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.097051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.108124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.108146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.108154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.118694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.118718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.118726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.131441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.131462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.131470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.139702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.139722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.139730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.151097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.151118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.151126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.160880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.160900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.160908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.168938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.168958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.732 [2024-12-09 17:37:23.168967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.732 [2024-12-09 17:37:23.179215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.732 [2024-12-09 17:37:23.179236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.733 [2024-12-09 17:37:23.179244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.733 [2024-12-09 17:37:23.191337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.733 [2024-12-09 17:37:23.191358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.733 [2024-12-09 17:37:23.191367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.733 [2024-12-09 17:37:23.199813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.733 [2024-12-09 17:37:23.199834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.733 [2024-12-09 17:37:23.199842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.733 [2024-12-09 17:37:23.210793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.733 [2024-12-09 17:37:23.210814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.733 [2024-12-09 17:37:23.210822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.733 [2024-12-09 17:37:23.223032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.733 [2024-12-09 17:37:23.223053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.733 [2024-12-09 17:37:23.223061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.733 [2024-12-09 17:37:23.234303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.733 [2024-12-09 17:37:23.234324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.733 [2024-12-09 17:37:23.234332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.733 [2024-12-09 17:37:23.244187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.733 [2024-12-09 17:37:23.244208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.733 [2024-12-09 17:37:23.244216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.733 [2024-12-09 17:37:23.253352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.733 [2024-12-09 17:37:23.253373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.733 [2024-12-09 17:37:23.253381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.733 [2024-12-09 17:37:23.262138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.733 [2024-12-09 17:37:23.262159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.733 [2024-12-09 17:37:23.262171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.272009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.272031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.272040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.281856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.281877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.281885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.292120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.292140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.292152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.299705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.299725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.299733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.310251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.310272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.310280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.322886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.322907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.322916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.334640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.334661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.334669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.344777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.344797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.344805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.355011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.355031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.355039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.363946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.363966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.363974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.373918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.373939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.373947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.382779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.382800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.382808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.392300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.392321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.392329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.401710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.401730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.401738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.411209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.411229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.411237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.419941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.419961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.419969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.429127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.429147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.429155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.438836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.438856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.438864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.446861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590)
00:26:56.992 [2024-12-09 17:37:23.446882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.992 [2024-12-09 17:37:23.446890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:56.992 [2024-12-09 17:37:23.457839]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.993 [2024-12-09 17:37:23.457860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.993 [2024-12-09 17:37:23.457872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.993 [2024-12-09 17:37:23.468395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.993 [2024-12-09 17:37:23.468415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.993 [2024-12-09 17:37:23.468423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.993 [2024-12-09 17:37:23.479144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.993 [2024-12-09 17:37:23.479170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.993 [2024-12-09 17:37:23.479178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.993 [2024-12-09 17:37:23.487945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.993 [2024-12-09 17:37:23.487966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.993 [2024-12-09 17:37:23.487974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:56.993 [2024-12-09 17:37:23.497273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.993 [2024-12-09 17:37:23.497293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.993 [2024-12-09 17:37:23.497301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.993 [2024-12-09 17:37:23.506391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.993 [2024-12-09 17:37:23.506411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.993 [2024-12-09 17:37:23.506419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.993 [2024-12-09 17:37:23.515298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.993 [2024-12-09 17:37:23.515318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.993 [2024-12-09 17:37:23.515326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.993 [2024-12-09 17:37:23.526724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:56.993 [2024-12-09 17:37:23.526745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.993 [2024-12-09 17:37:23.526753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.252 [2024-12-09 17:37:23.538284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:57.252 [2024-12-09 17:37:23.538305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.252 [2024-12-09 17:37:23.538313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.252 [2024-12-09 17:37:23.549015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:57.252 [2024-12-09 17:37:23.549039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.252 [2024-12-09 17:37:23.549047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.252 [2024-12-09 17:37:23.557553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:57.253 [2024-12-09 17:37:23.557573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.253 [2024-12-09 17:37:23.557581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.253 [2024-12-09 17:37:23.569605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:57.253 [2024-12-09 17:37:23.569625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.253 [2024-12-09 17:37:23.569633] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.253 [2024-12-09 17:37:23.580468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:57.253 [2024-12-09 17:37:23.580489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.253 [2024-12-09 17:37:23.580497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.253 [2024-12-09 17:37:23.589768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:57.253 [2024-12-09 17:37:23.589788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.253 [2024-12-09 17:37:23.589796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.253 [2024-12-09 17:37:23.597638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:57.253 [2024-12-09 17:37:23.597659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.253 [2024-12-09 17:37:23.597667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.253 [2024-12-09 17:37:23.608974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:57.253 [2024-12-09 17:37:23.608994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15987 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:57.253 [2024-12-09 17:37:23.609002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.253 [2024-12-09 17:37:23.620614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:57.253 [2024-12-09 17:37:23.620633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.253 [2024-12-09 17:37:23.620641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.253 [2024-12-09 17:37:23.628787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:57.253 [2024-12-09 17:37:23.628807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.253 [2024-12-09 17:37:23.628816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.253 [2024-12-09 17:37:23.638367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:57.253 [2024-12-09 17:37:23.638387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.253 [2024-12-09 17:37:23.638395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.253 [2024-12-09 17:37:23.647713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a3590) 00:26:57.253 [2024-12-09 17:37:23.647733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:31 nsid:1 lba:6010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.253 [2024-12-09 17:37:23.647741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.253 25144.50 IOPS, 98.22 MiB/s 00:26:57.253 Latency(us) 00:26:57.253 [2024-12-09T16:37:23.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.253 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:57.253 nvme0n1 : 2.00 25165.33 98.30 0.00 0.00 5081.14 2527.82 18225.25 00:26:57.253 [2024-12-09T16:37:23.793Z] =================================================================================================================== 00:26:57.253 [2024-12-09T16:37:23.793Z] Total : 25165.33 98.30 0.00 0.00 5081.14 2527.82 18225.25 00:26:57.253 { 00:26:57.253 "results": [ 00:26:57.253 { 00:26:57.253 "job": "nvme0n1", 00:26:57.253 "core_mask": "0x2", 00:26:57.253 "workload": "randread", 00:26:57.253 "status": "finished", 00:26:57.253 "queue_depth": 128, 00:26:57.253 "io_size": 4096, 00:26:57.253 "runtime": 2.003431, 00:26:57.253 "iops": 25165.32887830926, 00:26:57.253 "mibps": 98.30206593089555, 00:26:57.253 "io_failed": 0, 00:26:57.253 "io_timeout": 0, 00:26:57.253 "avg_latency_us": 5081.141780559657, 00:26:57.253 "min_latency_us": 2527.8171428571427, 00:26:57.253 "max_latency_us": 18225.249523809525 00:26:57.253 } 00:26:57.253 ], 00:26:57.253 "core_count": 1 00:26:57.253 } 00:26:57.253 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:57.253 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:57.253 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 
00:26:57.253 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:57.253 | .driver_specific 00:26:57.253 | .nvme_error 00:26:57.253 | .status_code 00:26:57.253 | .command_transient_transport_error' 00:26:57.512 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 197 > 0 )) 00:26:57.512 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2049119 00:26:57.512 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2049119 ']' 00:26:57.512 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2049119 00:26:57.512 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:57.512 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.512 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2049119 00:26:57.512 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:57.512 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:57.512 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2049119' 00:26:57.512 killing process with pid 2049119 00:26:57.512 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2049119 00:26:57.512 Received shutdown signal, test time was about 2.000000 seconds 00:26:57.512 00:26:57.512 Latency(us) 00:26:57.512 [2024-12-09T16:37:24.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.512 [2024-12-09T16:37:24.052Z] 
=================================================================================================================== 00:26:57.512 [2024-12-09T16:37:24.052Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:57.512 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2049119 00:26:57.771 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:57.771 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:57.771 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:57.772 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:57.772 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:57.772 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2049598 00:26:57.772 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2049598 /var/tmp/bperf.sock 00:26:57.772 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:57.772 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2049598 ']' 00:26:57.772 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:57.772 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:57.772 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:57.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:57.772 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:57.772 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:57.772 [2024-12-09 17:37:24.129894] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:26:57.772 [2024-12-09 17:37:24.129940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049598 ] 00:26:57.772 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:57.772 Zero copy mechanism will not be used. 00:26:57.772 [2024-12-09 17:37:24.203650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.772 [2024-12-09 17:37:24.241790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.031 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.031 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:58.031 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:58.031 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:58.031 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:58.031 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.031 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:58.031 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.031 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:58.031 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:58.599 nvme0n1 00:26:58.599 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:58.599 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.599 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:58.599 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.599 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:58.599 17:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:58.600 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:58.600 Zero copy mechanism will not be used. 00:26:58.600 Running I/O for 2 seconds... 
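The `get_transient_errcount` step traced earlier pipes `bdev_get_iostat -b nvme0n1` through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`, and the test then checks the count is positive (`(( 197 > 0 ))`). A minimal Python sketch of that same extraction, using hypothetical iostat output shaped only like the fields the filter touches (the real `bdev_get_iostat` response carries many more fields):

```python
import json

# Hypothetical bdev_get_iostat-style output: only the path walked by the
# jq filter in the trace is reproduced here; 197 matches the count the
# log's (( 197 > 0 )) check observed.
iostat_json = """
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 197
          }
        }
      }
    }
  ]
}
"""

def transient_errcount(raw: str) -> int:
    """Python equivalent of the jq pipeline
    '.bdevs[0] | .driver_specific | .nvme_error | .status_code
     | .command_transient_transport_error'."""
    stats = json.loads(raw)
    return stats["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

count = transient_errcount(iostat_json)
print(count)  # → 197
assert count > 0  # the digest-error test passes when errors were counted
```

Each injected crc32c corruption surfaces as a `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` completion in the records above, which is why the test can assert on this single per-bdev counter rather than parsing the log lines themselves.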
00:26:58.600 [2024-12-09 17:37:25.034421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:58.600 [2024-12-09 17:37:25.034453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.600 [2024-12-09 17:37:25.034464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.600 [2024-12-09 17:37:25.039601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:58.600 [2024-12-09 17:37:25.039627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.600 [2024-12-09 17:37:25.039636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.600 [2024-12-09 17:37:25.044740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:58.600 [2024-12-09 17:37:25.044769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.600 [2024-12-09 17:37:25.044777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.600 [2024-12-09 17:37:25.049889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:58.600 [2024-12-09 17:37:25.049912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.600 [2024-12-09 17:37:25.049920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.600 [2024-12-09 17:37:25.055010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:58.600 [2024-12-09 17:37:25.055032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.600 [2024-12-09 17:37:25.055040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.600 [2024-12-09 17:37:25.060532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:58.600 [2024-12-09 17:37:25.060555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.600 [2024-12-09 17:37:25.060567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.600 [2024-12-09 17:37:25.065620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:58.600 [2024-12-09 17:37:25.065642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.600 [2024-12-09 17:37:25.065650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.600 [2024-12-09 17:37:25.068404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:58.600 [2024-12-09 17:37:25.068426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.600 [2024-12-09 17:37:25.068434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.600 [2024-12-09 17:37:25.073549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:58.600 [2024-12-09 17:37:25.073570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.600 [2024-12-09 17:37:25.073579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.600 [2024-12-09 17:37:25.078676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:58.600 [2024-12-09 17:37:25.078698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.600 [2024-12-09 17:37:25.078706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.600 [2024-12-09 17:37:25.083823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:58.600 [2024-12-09 17:37:25.083844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.600 [2024-12-09 17:37:25.083852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.600 [2024-12-09 17:37:25.089008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:58.600 [2024-12-09 17:37:25.089028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:58.600 [2024-12-09 17:37:25.089037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.600 [2024-12-09 17:37:25.094073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.600 [2024-12-09 17:37:25.094094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.600 [2024-12-09 17:37:25.094102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.600 [2024-12-09 17:37:25.099172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.600 [2024-12-09 17:37:25.099193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.600 [2024-12-09 17:37:25.099201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.600 [2024-12-09 17:37:25.104385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.600 [2024-12-09 17:37:25.104406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.600 [2024-12-09 17:37:25.104415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.600 [2024-12-09 17:37:25.109458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.600 [2024-12-09 17:37:25.109477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.600 [2024-12-09 17:37:25.109485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.600 [2024-12-09 17:37:25.114517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.600 [2024-12-09 17:37:25.114537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.600 [2024-12-09 17:37:25.114545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.600 [2024-12-09 17:37:25.119642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.600 [2024-12-09 17:37:25.119662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.600 [2024-12-09 17:37:25.119670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.600 [2024-12-09 17:37:25.124908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.600 [2024-12-09 17:37:25.124930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.600 [2024-12-09 17:37:25.124938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.600 [2024-12-09 17:37:25.130102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.600 [2024-12-09 17:37:25.130122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.600 [2024-12-09 17:37:25.130130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.600 [2024-12-09 17:37:25.136105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.600 [2024-12-09 17:37:25.136129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.600 [2024-12-09 17:37:25.136138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.858 [2024-12-09 17:37:25.143324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.858 [2024-12-09 17:37:25.143346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.858 [2024-12-09 17:37:25.143356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.858 [2024-12-09 17:37:25.150560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.858 [2024-12-09 17:37:25.150583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.858 [2024-12-09 17:37:25.150595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.858 [2024-12-09 17:37:25.158476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.858 [2024-12-09 17:37:25.158499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.858 [2024-12-09 17:37:25.158508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.858 [2024-12-09 17:37:25.166076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.858 [2024-12-09 17:37:25.166098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.858 [2024-12-09 17:37:25.166107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.858 [2024-12-09 17:37:25.172792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.858 [2024-12-09 17:37:25.172814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.172823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.179644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.179667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.179675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.185015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.185037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.185045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.190241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.190262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.190271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.195463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.195484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.195493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.200647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.200669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.200677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.205890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.205916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.205925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.211088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.211110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.211118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.216295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.216315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.216323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.221370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.221392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.221400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.226532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.226552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.226560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.231656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.231677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.231685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.236796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.236817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.236824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.241900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.241922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.241930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.247025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.247045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.247053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.252147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.252173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.252182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.257273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.257294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.257302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.262455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.262475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.262482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.267588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.267609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.267617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.272696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.272717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.272725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.277838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.277859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.277867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.283000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.283021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.283028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.288175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.288195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.288203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.293306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.293327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.293338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.298561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.298582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.298590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.303687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.303707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.303715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.308813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.308833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.308841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.313856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.313877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.313884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.318971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.318991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.318998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.324092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.324111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.324119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.329586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.329607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.329615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.336130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.336151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.336160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.343262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.343287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.343295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.349714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.349736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.349745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.356824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.356847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.356855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.363456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.363477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.363485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.368729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.368749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.368757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.373768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.373790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.373799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.378900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.378921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.378930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.383899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.383920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.383928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.389070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.389090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.389098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:58.859 [2024-12-09 17:37:25.394149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:58.859 [2024-12-09 17:37:25.394191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.859 [2024-12-09 17:37:25.394200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.399181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.399203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.399211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.404389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.404409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.404417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.409494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.409515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.409523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.414719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.414740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.414748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.419884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.419905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.419913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.425083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.425104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.425112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.430281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.430302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.430310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.435431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.435452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.435464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.440592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.440613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.440621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.445767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.445788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.445796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.450893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.450913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.450921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.456127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.456148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.456156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.461267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.461287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.461294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.466334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.466355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.466363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.471380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.471401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.471409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.476681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.476702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.476710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.481904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.481925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.481932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.486981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.487002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.487011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.492162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.492189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.492197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.497453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.497474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.497482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.502554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.502574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.502582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.507601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.507622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.507630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.512789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.512809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.512817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.517902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.517922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.118 [2024-12-09 17:37:25.517929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.118 [2024-12-09 17:37:25.523890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.118 [2024-12-09 17:37:25.523911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.119 [2024-12-09 17:37:25.523922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.119 [2024-12-09 17:37:25.529398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.119 [2024-12-09 17:37:25.529419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.119 [2024-12-09 17:37:25.529427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002
p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.536019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.536040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.536048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.543185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.543206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.543215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.550576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.550598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.550607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.557472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.557494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.557504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.562648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.562671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.562679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.567719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.567740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.567748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.572801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.572823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.572832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.577849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.577874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.577882] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.582935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.582958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.582967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.588465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.588489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.588501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.593788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.593812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.593820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.599111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.599133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.599142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.604304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.604325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.604334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.609197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.609218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.609226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.614145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.614172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.614180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.619153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.619182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.619190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.624072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.624094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.624102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.629013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.629033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.629042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.633946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.633969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.633976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.638833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 
17:37:25.638855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.638863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.643808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.643830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.643838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.648747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.648768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.648776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.119 [2024-12-09 17:37:25.653729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.119 [2024-12-09 17:37:25.653751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.119 [2024-12-09 17:37:25.653759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.380 [2024-12-09 17:37:25.658636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1e91640) 00:26:59.380 [2024-12-09 17:37:25.658658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.380 [2024-12-09 17:37:25.658667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.380 [2024-12-09 17:37:25.663675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.380 [2024-12-09 17:37:25.663696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.380 [2024-12-09 17:37:25.663708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.380 [2024-12-09 17:37:25.668719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.380 [2024-12-09 17:37:25.668741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.380 [2024-12-09 17:37:25.668749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.380 [2024-12-09 17:37:25.673808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.380 [2024-12-09 17:37:25.673831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.380 [2024-12-09 17:37:25.673846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.380 [2024-12-09 17:37:25.678983] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.380 [2024-12-09 17:37:25.679005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.380 [2024-12-09 17:37:25.679013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.380 [2024-12-09 17:37:25.684155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.380 [2024-12-09 17:37:25.684183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.380 [2024-12-09 17:37:25.684192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.380 [2024-12-09 17:37:25.689301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.380 [2024-12-09 17:37:25.689322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.380 [2024-12-09 17:37:25.689330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.380 [2024-12-09 17:37:25.694403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.380 [2024-12-09 17:37:25.694425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.380 [2024-12-09 17:37:25.694432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:59.380 [2024-12-09 17:37:25.699589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.380 [2024-12-09 17:37:25.699610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.380 [2024-12-09 17:37:25.699618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.380 [2024-12-09 17:37:25.704769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.380 [2024-12-09 17:37:25.704791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.380 [2024-12-09 17:37:25.704799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.380 [2024-12-09 17:37:25.709924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.380 [2024-12-09 17:37:25.709950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.709958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.715097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.715124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.715131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.720326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.720348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.720355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.725466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.725487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.725495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.730587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.730608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.730616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.736023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.736045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.736053] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.741771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.741793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.741801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.747154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.747183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.747191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.754164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.754195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.754204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.760952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.760975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.760984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.767196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.767219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.767227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.773161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.773190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.773199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.779300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.779322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.779331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.784804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.784827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.784835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.791464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.791487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.791495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.798641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.798663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.798671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.805518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 17:37:25.805542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.381 [2024-12-09 17:37:25.805551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.381 [2024-12-09 17:37:25.813718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.381 [2024-12-09 
17:37:25.813742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.381 [2024-12-09 17:37:25.813754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.381 [2024-12-09 17:37:25.820044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.381 [2024-12-09 17:37:25.820066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.381 [2024-12-09 17:37:25.820074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... repeated triplets elided: nvme_tcp.c:1365 data digest error on tqpair=(0x1e91640), the failing READ command (qid:1, len:32), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, from 17:37:25.823 through 17:37:26.025 ...]
00:26:59.642 5706.00 IOPS, 713.25 MiB/s [2024-12-09T16:37:26.182Z]
[... further identical data digest error / transient transport error triplets elided, from 17:37:26.031 through 17:37:26.274 ...]
00:26:59.902 [2024-12-09 17:37:26.279703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:26:59.902 [2024-12-09 17:37:26.279724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.279732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.284841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.284862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.284870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.289967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.289988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.289996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.295328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.295349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.295357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.300589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.300611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.300620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.305883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.305904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.305916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.311133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.311154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.311162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.316574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.316595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.316603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.322019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.322039] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.322048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.327287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.327308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.327317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.331679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.331700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.331708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.336715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.336736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.336744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.341811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.341833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.341841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.347023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.347045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.347053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.352236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.352259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.352267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.357617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.357639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.357647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.363098] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.363121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.363129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.368471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.368493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.368501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.373739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.373761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.373769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.379024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.379046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.379054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.384415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.384437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.384445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.389788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.902 [2024-12-09 17:37:26.389810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.902 [2024-12-09 17:37:26.389818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.902 [2024-12-09 17:37:26.395102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.903 [2024-12-09 17:37:26.395123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.903 [2024-12-09 17:37:26.395131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.903 [2024-12-09 17:37:26.400430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.903 [2024-12-09 17:37:26.400452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.903 [2024-12-09 17:37:26.400460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.903 [2024-12-09 17:37:26.405718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.903 [2024-12-09 17:37:26.405740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.903 [2024-12-09 17:37:26.405748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.903 [2024-12-09 17:37:26.411133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.903 [2024-12-09 17:37:26.411154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.903 [2024-12-09 17:37:26.411163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.903 [2024-12-09 17:37:26.416511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.903 [2024-12-09 17:37:26.416532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.903 [2024-12-09 17:37:26.416539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.903 [2024-12-09 17:37:26.421897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.903 [2024-12-09 17:37:26.421918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.903 [2024-12-09 
17:37:26.421927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.903 [2024-12-09 17:37:26.427161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.903 [2024-12-09 17:37:26.427190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.903 [2024-12-09 17:37:26.427198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.903 [2024-12-09 17:37:26.432285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.903 [2024-12-09 17:37:26.432306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.903 [2024-12-09 17:37:26.432315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.903 [2024-12-09 17:37:26.437544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:26:59.903 [2024-12-09 17:37:26.437565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.903 [2024-12-09 17:37:26.437574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.442758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.442779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.442790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.447965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.447986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.447994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.453015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.453035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.453043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.458145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.458171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.458181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.463293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.463314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.463322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.468484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.468505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.468513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.473930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.473953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.473961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.479451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.479473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.479481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.484868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.484890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.484898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.490216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.490241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.490249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.495912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.495935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.495943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.501246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.501267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.501275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.506664] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.506686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.506695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.511972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.511995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.512003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.517237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.517258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.517266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.522544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.522566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.522574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.527891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.527912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.527921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.533271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.533292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.163 [2024-12-09 17:37:26.533304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.163 [2024-12-09 17:37:26.538772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.163 [2024-12-09 17:37:26.538794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.538802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.544099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.544120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.544128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.550374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.550397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.550406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.557959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.557982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.557990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.564725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.564748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.564757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.571231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.571253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 
17:37:26.571261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.577482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.577505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.577513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.583128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.583150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.583158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.589706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.589732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.589740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.596896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.596919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.596928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.604260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.604282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.604290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.611950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.611973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.611981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.618577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.618599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.618607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.625062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.625084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.625093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.631619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.631641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.631650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.637394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.637416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.637424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.642999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.643020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.643029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.648974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.648996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.649003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.654624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.654646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.654654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.660250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.660272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.660280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.665254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.665275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.665284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.670408] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.670430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.670438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.674000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.674022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.674031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.678374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.678396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.678404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.683710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.683732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.683739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.688995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.689016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.689031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.694444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.694466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.694474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.164 [2024-12-09 17:37:26.700964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.164 [2024-12-09 17:37:26.700987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.164 [2024-12-09 17:37:26.700995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.707559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.707582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.707590] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.715434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.715457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.715466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.723373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.723395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.723404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.731543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.731565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.731574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.739467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.739489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 
17:37:26.739498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.747295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.747317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.747327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.754929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.754955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.754963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.762905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.762927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.762936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.771074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.771096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14176 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.771105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.779859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.779882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.779892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.787501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.787523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.787532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.795984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.796006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.796015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.803831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.803853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.803862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.812233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.812256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.812265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.819826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.425 [2024-12-09 17:37:26.819848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.425 [2024-12-09 17:37:26.819857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.425 [2024-12-09 17:37:26.826022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.826044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.826052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.832152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.832180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.832189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.838766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.838787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.838796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.845250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.845270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.845279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.851482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.851504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.851512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.856899] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.856919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.856927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.862176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.862197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.862205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.868546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.868568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.868576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.875817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.875840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.875852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.882156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.882185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.882196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.888474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.888497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.888505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.894810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.894833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.894842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.900011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.900034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.900042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.905289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.905311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.905319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.910620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.910640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.910648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.915891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.915912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.915920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.921242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.921273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 
17:37:26.921282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.926646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.926667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.926675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.932084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.932105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.932113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.937404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.937441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.937450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.942688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.942709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.942717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.947915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.947937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.947945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.953293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.953314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.953323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.958546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.958568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.958576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.426 [2024-12-09 17:37:26.963831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.426 [2024-12-09 17:37:26.963853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.426 [2024-12-09 17:37:26.963861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.686 [2024-12-09 17:37:26.969120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.686 [2024-12-09 17:37:26.969142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.686 [2024-12-09 17:37:26.969154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.686 [2024-12-09 17:37:26.974381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.686 [2024-12-09 17:37:26.974413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.686 [2024-12-09 17:37:26.974421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.686 [2024-12-09 17:37:26.979623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.686 [2024-12-09 17:37:26.979646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.686 [2024-12-09 17:37:26.979654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.686 [2024-12-09 17:37:26.984721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e91640) 00:27:00.686 [2024-12-09 17:37:26.984743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.686 [2024-12-09 17:37:26.984751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.686 [2024-12-09 17:37:26.989810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.686 [2024-12-09 17:37:26.989833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.686 [2024-12-09 17:37:26.989841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.686 [2024-12-09 17:37:26.994866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.686 [2024-12-09 17:37:26.994888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.686 [2024-12-09 17:37:26.994897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.686 [2024-12-09 17:37:26.999976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.686 [2024-12-09 17:37:26.999997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.686 [2024-12-09 17:37:27.000005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.686 [2024-12-09 17:37:27.005131] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.686 [2024-12-09 17:37:27.005152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.686 [2024-12-09 17:37:27.005160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.686 [2024-12-09 17:37:27.010270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.686 [2024-12-09 17:37:27.010290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.686 [2024-12-09 17:37:27.010298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.686 [2024-12-09 17:37:27.015467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.686 [2024-12-09 17:37:27.015491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.686 [2024-12-09 17:37:27.015500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.686 [2024-12-09 17:37:27.020547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640) 00:27:00.686 [2024-12-09 17:37:27.020568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.686 [2024-12-09 17:37:27.020576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0
00:27:00.686 [2024-12-09 17:37:27.025674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:27:00.686 [2024-12-09 17:37:27.025695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.686 [2024-12-09 17:37:27.025703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:00.686 5466.00 IOPS, 683.25 MiB/s [2024-12-09T16:37:27.226Z] [2024-12-09 17:37:27.031915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91640)
00:27:00.687 [2024-12-09 17:37:27.031937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.687 [2024-12-09 17:37:27.031946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:00.687
00:27:00.687 Latency(us)
00:27:00.687 [2024-12-09T16:37:27.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:00.687 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:00.687 nvme0n1 : 2.00 5464.23 683.03 0.00 0.00 2925.15 631.95 13232.03
00:27:00.687 [2024-12-09T16:37:27.227Z] ===================================================================================================================
00:27:00.687 [2024-12-09T16:37:27.227Z] Total : 5464.23 683.03 0.00 0.00 2925.15 631.95 13232.03
00:27:00.687 {
00:27:00.687 "results": [
00:27:00.687 {
00:27:00.687 "job": "nvme0n1",
00:27:00.687 "core_mask": "0x2",
00:27:00.687 "workload": "randread",
00:27:00.687 "status": "finished",
00:27:00.687 "queue_depth": 16,
00:27:00.687 "io_size": 131072,
00:27:00.687 "runtime": 2.003576,
00:27:00.687 "iops": 5464.229956837175,
00:27:00.687 "mibps": 683.0287446046469,
00:27:00.687 "io_failed": 0,
00:27:00.687 "io_timeout": 0,
00:27:00.687 "avg_latency_us": 2925.1521965307866,
00:27:00.687 "min_latency_us": 631.9542857142857,
00:27:00.687 "max_latency_us": 13232.030476190475
00:27:00.687 }
00:27:00.687 ],
00:27:00.687 "core_count": 1
00:27:00.687 }
00:27:00.687 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:00.687 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:00.687 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:00.687 | .driver_specific
00:27:00.687 | .nvme_error
00:27:00.687 | .status_code
00:27:00.687 | .command_transient_transport_error'
00:27:00.687 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:00.945 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 354 > 0 ))
00:27:00.945 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2049598
00:27:00.945 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2049598 ']'
00:27:00.945 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2049598
00:27:00.945 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:00.945 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2049598
00:27:00.946 17:37:27
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2049598'
00:27:00.946 killing process with pid 2049598
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2049598
00:27:00.946 Received shutdown signal, test time was about 2.000000 seconds
00:27:00.946
00:27:00.946 Latency(us)
00:27:00.946 [2024-12-09T16:37:27.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:00.946 [2024-12-09T16:37:27.486Z] ===================================================================================================================
00:27:00.946 [2024-12-09T16:37:27.486Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2049598
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2050265
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2050265 /var/tmp/bperf.sock
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2050265 ']'
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:00.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:00.946 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:01.204 [2024-12-09 17:37:27.505572] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
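The `waitforlisten` step above (with `local max_retries=100`) simply retries connecting to bdevperf's RPC socket at `/var/tmp/bperf.sock` until the freshly spawned process is accepting connections. A minimal Python sketch of that poll-until-listening pattern; the socket path, delay, and background server thread here are illustrative stand-ins, not SPDK's actual helper:

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_listen(path, max_retries=100, delay=0.1):
    """Poll a UNIX domain socket path until a server accepts connections."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)  # succeeds only once the server is listening
            return True
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(delay)  # server not up yet; retry after a short pause
        finally:
            s.close()
    return False

if __name__ == "__main__":
    sock_path = os.path.join(tempfile.mkdtemp(), "bperf.sock")

    def serve_later():
        time.sleep(0.3)  # simulate slow process startup
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(sock_path)
        srv.listen(1)
        time.sleep(2)  # keep listening while the client polls
        srv.close()

    threading.Thread(target=serve_later, daemon=True).start()
    print(wait_for_listen(sock_path))  # True once the server comes up
```

The same shape works for any RPC daemon that is started with `&` and then driven over a UNIX socket, as bdevperf is in this test.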
00:27:01.204 [2024-12-09 17:37:27.505619] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2050265 ]
00:27:01.204 [2024-12-09 17:37:27.580287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:01.204 [2024-12-09 17:37:27.620661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:01.205 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:01.205 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:01.205 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:01.205 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:01.464 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:01.464 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.464 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:01.464 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.464 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:01.464 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:01.723 nvme0n1
00:27:01.982 17:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:01.982 17:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.982 17:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:01.982 17:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.982 17:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:01.982 17:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:01.982 Running I/O for 2 seconds...
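Earlier in this log, `get_transient_errcount` pipes `bdev_get_iostat -b nvme0n1` through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` and the test passes when the count is non-zero (`(( 354 > 0 ))` above). A minimal Python sketch of the same extraction, run against an illustrative iostat payload; the payload below only models the fields the jq path touches, with the 354 figure borrowed from this log's check, and is not a complete `bdev_get_iostat` response:

```python
import json

# Illustrative bdev_get_iostat-style payload (only the fields the
# jq filter in digest.sh traverses are sketched here).
IOSTAT_JSON = """
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 354
          }
        }
      }
    }
  ]
}
"""

def get_transient_errcount(iostat):
    # Same traversal as the jq filter:
    # .bdevs[0] | .driver_specific | .nvme_error | .status_code
    #           | .command_transient_transport_error
    return (iostat["bdevs"][0]["driver_specific"]
                  ["nvme_error"]["status_code"]
                  ["command_transient_transport_error"])

count = get_transient_errcount(json.loads(IOSTAT_JSON))
print(count)      # 354 for this sample payload
assert count > 0  # the test's pass condition: digest errors were observed
```

In the test itself the counter is non-zero precisely because `accel_error_inject_error -o crc32c -t corrupt` forces CRC32C digest failures, which the initiator reports as COMMAND TRANSIENT TRANSPORT ERROR completions.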
00:27:01.982 [2024-12-09 17:37:28.388028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef4f40 00:27:01.982 [2024-12-09 17:37:28.389018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.389047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.399815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee38d0 00:27:01.982 [2024-12-09 17:37:28.401246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.401268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.406518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee0630 00:27:01.982 [2024-12-09 17:37:28.407209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.407229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.417722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef8618 00:27:01.982 [2024-12-09 17:37:28.418781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.418800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.426936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef96f8 00:27:01.982 [2024-12-09 17:37:28.427946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.427965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.435639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee5220 00:27:01.982 [2024-12-09 17:37:28.436656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.436676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.444788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efa7d8 00:27:01.982 [2024-12-09 17:37:28.445765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.445784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.453344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef2948 00:27:01.982 [2024-12-09 17:37:28.454154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.454177] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.461976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee6b70 00:27:01.982 [2024-12-09 17:37:28.462610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.462630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.470889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef4f40 00:27:01.982 [2024-12-09 17:37:28.471622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.471641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.480158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef7100 00:27:01.982 [2024-12-09 17:37:28.480889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.480909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.490463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eec840 00:27:01.982 [2024-12-09 17:37:28.491596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 
17:37:28.491615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.499748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efe2e8 00:27:01.982 [2024-12-09 17:37:28.500973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.500993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.508561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef5378 00:27:01.982 [2024-12-09 17:37:28.509567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.509587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:01.982 [2024-12-09 17:37:28.519542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efbcf0 00:27:01.982 [2024-12-09 17:37:28.521173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.982 [2024-12-09 17:37:28.521193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:02.242 [2024-12-09 17:37:28.526104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef3a28 00:27:02.242 [2024-12-09 17:37:28.526869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2518 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:02.242 [2024-12-09 17:37:28.526888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:02.242 [2024-12-09 17:37:28.535377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef5be8 00:27:02.242 [2024-12-09 17:37:28.536119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.242 [2024-12-09 17:37:28.536137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:02.242 [2024-12-09 17:37:28.543846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee99d8 00:27:02.242 [2024-12-09 17:37:28.544547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.242 [2024-12-09 17:37:28.544567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:02.242 [2024-12-09 17:37:28.553268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef46d0 00:27:02.242 [2024-12-09 17:37:28.554035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.242 [2024-12-09 17:37:28.554053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:02.242 [2024-12-09 17:37:28.564159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ede8a8 00:27:02.242 [2024-12-09 17:37:28.565312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:80 nsid:1 lba:18306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.565332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.571784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eef6a8 00:27:02.243 [2024-12-09 17:37:28.572316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.572335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.580364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee0ea0 00:27:02.243 [2024-12-09 17:37:28.580898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.580921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.591786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efc998 00:27:02.243 [2024-12-09 17:37:28.593218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.593238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.601170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016edf988 00:27:02.243 [2024-12-09 17:37:28.602707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.602726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.607579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee5ec8 00:27:02.243 [2024-12-09 17:37:28.608271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.608291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.618988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efc560 00:27:02.243 [2024-12-09 17:37:28.620424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.620443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.626791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef9f68 00:27:02.243 [2024-12-09 17:37:28.627695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.627714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.635835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efe2e8 
00:27:02.243 [2024-12-09 17:37:28.636882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.636901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.646008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee6fa8 00:27:02.243 [2024-12-09 17:37:28.647474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.647494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.652598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efac10 00:27:02.243 [2024-12-09 17:37:28.653228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.653247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.662241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef5378 00:27:02.243 [2024-12-09 17:37:28.663128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.663147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.671493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e6b390) with pdu=0x200016ee1f80 00:27:02.243 [2024-12-09 17:37:28.671917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.671937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.681838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eebb98 00:27:02.243 [2024-12-09 17:37:28.683000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.683020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.690610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eee190 00:27:02.243 [2024-12-09 17:37:28.691665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.691684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.699466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eea248 00:27:02.243 [2024-12-09 17:37:28.700277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.700297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.708352] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eea248 00:27:02.243 [2024-12-09 17:37:28.709231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.709251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.717346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eea248 00:27:02.243 [2024-12-09 17:37:28.718228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.718246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.726278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eea248 00:27:02.243 [2024-12-09 17:37:28.727202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.727221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.736483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eea248 00:27:02.243 [2024-12-09 17:37:28.737861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.737880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:27:02.243 [2024-12-09 17:37:28.745723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeea00 00:27:02.243 [2024-12-09 17:37:28.747000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.747019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.753269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef7538 00:27:02.243 [2024-12-09 17:37:28.753826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.753845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.762679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efb480 00:27:02.243 [2024-12-09 17:37:28.763345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.763365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:02.243 [2024-12-09 17:37:28.771188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee4578 00:27:02.243 [2024-12-09 17:37:28.771886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.243 [2024-12-09 17:37:28.771905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:02.503 [2024-12-09 17:37:28.782176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef8618 00:27:02.503 [2024-12-09 17:37:28.783704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.503 [2024-12-09 17:37:28.783723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:02.503 [2024-12-09 17:37:28.791749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee01f8 00:27:02.503 [2024-12-09 17:37:28.793342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.503 [2024-12-09 17:37:28.793360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:02.503 [2024-12-09 17:37:28.798118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef0ff8 00:27:02.503 [2024-12-09 17:37:28.798854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.503 [2024-12-09 17:37:28.798873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:02.503 [2024-12-09 17:37:28.807144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef1430 00:27:02.503 [2024-12-09 17:37:28.808044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.503 [2024-12-09 17:37:28.808062] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:02.503 [2024-12-09 17:37:28.818205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef46d0 00:27:02.503 [2024-12-09 17:37:28.819426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.503 [2024-12-09 17:37:28.819448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:02.503 [2024-12-09 17:37:28.826737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efe720 00:27:02.503 [2024-12-09 17:37:28.827954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.503 [2024-12-09 17:37:28.827973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:02.503 [2024-12-09 17:37:28.835181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef5be8 00:27:02.503 [2024-12-09 17:37:28.836059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.503 [2024-12-09 17:37:28.836077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:02.503 [2024-12-09 17:37:28.844331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eebfd0 00:27:02.503 [2024-12-09 17:37:28.844988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.503 [2024-12-09 17:37:28.845007] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:02.503 [2024-12-09 17:37:28.852820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eedd58 00:27:02.503 [2024-12-09 17:37:28.854009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.503 [2024-12-09 17:37:28.854028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:02.503 [2024-12-09 17:37:28.861201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ede8a8 00:27:02.503 [2024-12-09 17:37:28.861834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.503 [2024-12-09 17:37:28.861853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:02.503 [2024-12-09 17:37:28.870485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eee5c8 00:27:02.503 [2024-12-09 17:37:28.871224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.503 [2024-12-09 17:37:28.871243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:02.503 [2024-12-09 17:37:28.879495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efc560 00:27:02.503 [2024-12-09 17:37:28.880344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:02.503 [2024-12-09 17:37:28.880363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:02.503 [2024-12-09 17:37:28.888985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eebb98 00:27:02.503 [2024-12-09 17:37:28.889969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.503 [2024-12-09 17:37:28.889988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:28.898992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee0a68 00:27:02.504 [2024-12-09 17:37:28.900111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:28.900130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:28.907678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef3e60 00:27:02.504 [2024-12-09 17:37:28.908868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:28.908886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:28.916429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef6458 00:27:02.504 [2024-12-09 17:37:28.917256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12755 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:28.917275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:28.925742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee3498 00:27:02.504 [2024-12-09 17:37:28.926612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:28.926630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:28.935830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eea248 00:27:02.504 [2024-12-09 17:37:28.936855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:28.936874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:28.944796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef5be8 00:27:02.504 [2024-12-09 17:37:28.945817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:28.945836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:28.953848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eee5c8 00:27:02.504 [2024-12-09 17:37:28.954894] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:28.954912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:28.962926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeaef0 00:27:02.504 [2024-12-09 17:37:28.963990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:28.964009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:28.972192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeff18 00:27:02.504 [2024-12-09 17:37:28.973343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:28.973362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:28.980704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee95a0 00:27:02.504 [2024-12-09 17:37:28.981723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:28.981742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:28.989389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee4140 00:27:02.504 [2024-12-09 17:37:28.990414] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:28.990432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:28.997768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeea00 00:27:02.504 [2024-12-09 17:37:28.998433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:28.998452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:29.006917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eedd58 00:27:02.504 [2024-12-09 17:37:29.007353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:29.007373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:29.016118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef7100 00:27:02.504 [2024-12-09 17:37:29.016889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:29.016907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:29.025132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee1f80 
00:27:02.504 [2024-12-09 17:37:29.025905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:29.025924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:02.504 [2024-12-09 17:37:29.034212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eec408 00:27:02.504 [2024-12-09 17:37:29.034982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.504 [2024-12-09 17:37:29.035001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.042748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efef90 00:27:02.764 [2024-12-09 17:37:29.043493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.043511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.052127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeee38 00:27:02.764 [2024-12-09 17:37:29.052852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.052873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.060975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e6b390) with pdu=0x200016ef1868 00:27:02.764 [2024-12-09 17:37:29.061709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.061728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.071001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee6300 00:27:02.764 [2024-12-09 17:37:29.071907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.071927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.080251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef2510 00:27:02.764 [2024-12-09 17:37:29.080903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.080922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.089719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef7da8 00:27:02.764 [2024-12-09 17:37:29.090502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.090522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.098926] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee6738 00:27:02.764 [2024-12-09 17:37:29.100027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.100047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.107960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee5658 00:27:02.764 [2024-12-09 17:37:29.109058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.109076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.116927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef4f40 00:27:02.764 [2024-12-09 17:37:29.118036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.118054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.126159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef7538 00:27:02.764 [2024-12-09 17:37:29.127321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.127339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 
m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.135257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee99d8 00:27:02.764 [2024-12-09 17:37:29.136377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.136396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.144251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efd208 00:27:02.764 [2024-12-09 17:37:29.145371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.145389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.153414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef1ca0 00:27:02.764 [2024-12-09 17:37:29.154440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.154458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.162590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efd208 00:27:02.764 [2024-12-09 17:37:29.163608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.163627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.171007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eebfd0 00:27:02.764 [2024-12-09 17:37:29.172076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.172095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.180181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efda78 00:27:02.764 [2024-12-09 17:37:29.181160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.181181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.188742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef6020 00:27:02.764 [2024-12-09 17:37:29.189750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.189769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.198237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eebfd0 00:27:02.764 [2024-12-09 17:37:29.199348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.199368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.207710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeea00 00:27:02.764 [2024-12-09 17:37:29.208938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.208957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.217077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeff18 00:27:02.764 [2024-12-09 17:37:29.218427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.218446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.226567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eed4e8 00:27:02.764 [2024-12-09 17:37:29.228054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.764 [2024-12-09 17:37:29.228073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:02.764 [2024-12-09 17:37:29.233151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee4de8 00:27:02.764 [2024-12-09 17:37:29.233883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:02.764 [2024-12-09 17:37:29.233902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:02.765 [2024-12-09 17:37:29.242537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeea00 00:27:02.765 [2024-12-09 17:37:29.243390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.765 [2024-12-09 17:37:29.243408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:02.765 [2024-12-09 17:37:29.251721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efc560 00:27:02.765 [2024-12-09 17:37:29.252571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.765 [2024-12-09 17:37:29.252590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:02.765 [2024-12-09 17:37:29.261004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eff3c8 00:27:02.765 [2024-12-09 17:37:29.261878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.765 [2024-12-09 17:37:29.261898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:02.765 [2024-12-09 17:37:29.270001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee5a90 00:27:02.765 [2024-12-09 17:37:29.271005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:23755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.765 [2024-12-09 17:37:29.271023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:02.765 [2024-12-09 17:37:29.279512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee0a68
00:27:02.765 [2024-12-09 17:37:29.280624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.765 [2024-12-09 17:37:29.280643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:02.765 [2024-12-09 17:37:29.288935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee5ec8
00:27:02.765 [2024-12-09 17:37:29.290150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.765 [2024-12-09 17:37:29.290176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:02.765 [2024-12-09 17:37:29.298401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeaab8
00:27:02.765 [2024-12-09 17:37:29.299790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.765 [2024-12-09 17:37:29.299809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:03.024 [2024-12-09 17:37:29.306980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee01f8
00:27:03.024 [2024-12-09 17:37:29.308333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.024 [2024-12-09 17:37:29.308352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:03.024 [2024-12-09 17:37:29.314798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee0a68
00:27:03.024 [2024-12-09 17:37:29.315527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.024 [2024-12-09 17:37:29.315545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:27:03.024 [2024-12-09 17:37:29.324292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efb048
00:27:03.024 [2024-12-09 17:37:29.325173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.024 [2024-12-09 17:37:29.325191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:03.024 [2024-12-09 17:37:29.335534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee01f8
00:27:03.024 [2024-12-09 17:37:29.336926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.024 [2024-12-09 17:37:29.336944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:27:03.024 [2024-12-09 17:37:29.342035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee23b8
00:27:03.024 [2024-12-09 17:37:29.342682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.024 [2024-12-09 17:37:29.342701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:03.024 [2024-12-09 17:37:29.353278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef6cc8
00:27:03.024 [2024-12-09 17:37:29.354437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.024 [2024-12-09 17:37:29.354456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:03.024 [2024-12-09 17:37:29.362568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eec840
00:27:03.025 [2024-12-09 17:37:29.363267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.363286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.371080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef9b30
00:27:03.025 [2024-12-09 17:37:29.372325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.372344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:27:03.025 27842.00 IOPS, 108.76 MiB/s [2024-12-09T16:37:29.565Z] [2024-12-09 17:37:29.380554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efeb58
00:27:03.025 [2024-12-09 17:37:29.381411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.381431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.389052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ede038
00:27:03.025 [2024-12-09 17:37:29.390317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.390336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.398123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ede470
00:27:03.025 [2024-12-09 17:37:29.399127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.399145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.407400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016edece0
00:27:03.025 [2024-12-09 17:37:29.407995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.408015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.416115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee84c0
00:27:03.025 [2024-12-09 17:37:29.416628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.416648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.425651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee27f0
00:27:03.025 [2024-12-09 17:37:29.426209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.426228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.435156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efcdd0
00:27:03.025 [2024-12-09 17:37:29.435836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.435855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.443500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee8d30
00:27:03.025 [2024-12-09 17:37:29.444217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.444236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.453811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef4f40
00:27:03.025 [2024-12-09 17:37:29.455073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.455092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.461304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efcdd0
00:27:03.025 [2024-12-09 17:37:29.461730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.461748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.470679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efb8b8
00:27:03.025 [2024-12-09 17:37:29.471243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.471263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.480140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efd208
00:27:03.025 [2024-12-09 17:37:29.480819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.480838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.488597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee6fa8
00:27:03.025 [2024-12-09 17:37:29.489848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.489866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.497757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eec408
00:27:03.025 [2024-12-09 17:37:29.498684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.498703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.506805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efbcf0
00:27:03.025 [2024-12-09 17:37:29.507701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.507719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.515962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eecc78
00:27:03.025 [2024-12-09 17:37:29.516777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.516796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.526241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee6b70
00:27:03.025 [2024-12-09 17:37:29.527612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.527633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.535412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee5220
00:27:03.025 [2024-12-09 17:37:29.536779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.536798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.541612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee73e0
00:27:03.025 [2024-12-09 17:37:29.542236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.542254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.550786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efc128
00:27:03.025 [2024-12-09 17:37:29.551413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.551432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:27:03.025 [2024-12-09 17:37:29.560072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efb480
00:27:03.025 [2024-12-09 17:37:29.560718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.025 [2024-12-09 17:37:29.560738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.571974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef9f68
00:27:03.285 [2024-12-09 17:37:29.573465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.573484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.578438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efc560
00:27:03.285 [2024-12-09 17:37:29.579095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.579115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.587358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eedd58
00:27:03.285 [2024-12-09 17:37:29.588104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.588124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.596810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef1868
00:27:03.285 [2024-12-09 17:37:29.597704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.597722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.605962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee4578
00:27:03.285 [2024-12-09 17:37:29.606405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.606424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.617453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efe2e8
00:27:03.285 [2024-12-09 17:37:29.618948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.618966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.623863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eee190
00:27:03.285 [2024-12-09 17:37:29.624491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.624509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.633258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeb328
00:27:03.285 [2024-12-09 17:37:29.634036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.634055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.643539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eedd58
00:27:03.285 [2024-12-09 17:37:29.644820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.644838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.651766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef2d80
00:27:03.285 [2024-12-09 17:37:29.653003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.653023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.660064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee3d08
00:27:03.285 [2024-12-09 17:37:29.660642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.660661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.669324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef8a50
00:27:03.285 [2024-12-09 17:37:29.669904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.669925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.678674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee9e10
00:27:03.285 [2024-12-09 17:37:29.679101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.679121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.688064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eee5c8
00:27:03.285 [2024-12-09 17:37:29.688623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.688642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.697578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef2948
00:27:03.285 [2024-12-09 17:37:29.698264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.285 [2024-12-09 17:37:29.698283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:03.285 [2024-12-09 17:37:29.706279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eed920
00:27:03.285 [2024-12-09 17:37:29.707514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.286 [2024-12-09 17:37:29.707533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:03.286 [2024-12-09 17:37:29.714008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeff18
00:27:03.286 [2024-12-09 17:37:29.714655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.286 [2024-12-09 17:37:29.714673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:03.286 [2024-12-09 17:37:29.723524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efe720
00:27:03.286 [2024-12-09 17:37:29.724286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.286 [2024-12-09 17:37:29.724305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:27:03.286 [2024-12-09 17:37:29.733502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee1b48
00:27:03.286 [2024-12-09 17:37:29.734314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.286 [2024-12-09 17:37:29.734333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:03.286 [2024-12-09 17:37:29.742826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016edf550
00:27:03.286 [2024-12-09 17:37:29.743838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.286 [2024-12-09 17:37:29.743857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:03.286 [2024-12-09 17:37:29.752364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef3e60
00:27:03.286 [2024-12-09 17:37:29.753628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.286 [2024-12-09 17:37:29.753648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:03.286 [2024-12-09 17:37:29.761487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee9e10
00:27:03.286 [2024-12-09 17:37:29.762768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.286 [2024-12-09 17:37:29.762790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:03.286 [2024-12-09 17:37:29.770114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef8e88
00:27:03.286 [2024-12-09 17:37:29.771269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.286 [2024-12-09 17:37:29.771288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:03.286 [2024-12-09 17:37:29.777956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eea248
00:27:03.286 [2024-12-09 17:37:29.778554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.286 [2024-12-09 17:37:29.778574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.286 [2024-12-09 17:37:29.786867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee23b8
00:27:03.286 [2024-12-09 17:37:29.787432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.286 [2024-12-09 17:37:29.787451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.286 [2024-12-09 17:37:29.795518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee1f80
00:27:03.286 [2024-12-09 17:37:29.796200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.286 [2024-12-09 17:37:29.796219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.286 [2024-12-09 17:37:29.806692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efe2e8
00:27:03.286 [2024-12-09 17:37:29.807638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.286 [2024-12-09 17:37:29.807658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:03.286 [2024-12-09 17:37:29.816177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef3a28
00:27:03.286 [2024-12-09 17:37:29.817378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.286 [2024-12-09 17:37:29.817397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:03.286 [2024-12-09 17:37:29.824381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efc128
00:27:03.545 [2024-12-09 17:37:29.825801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.825821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:27:03.545 [2024-12-09 17:37:29.832423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee5ec8
00:27:03.545 [2024-12-09 17:37:29.833149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.833171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:03.545 [2024-12-09 17:37:29.842834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efda78
00:27:03.545 [2024-12-09 17:37:29.843674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.843694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:03.545 [2024-12-09 17:37:29.851530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eee190
00:27:03.545 [2024-12-09 17:37:29.852359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.852377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:27:03.545 [2024-12-09 17:37:29.860916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef57b0
00:27:03.545 [2024-12-09 17:37:29.861971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.861990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:03.545 [2024-12-09 17:37:29.870182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee6b70
00:27:03.545 [2024-12-09 17:37:29.871238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.871257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:03.545 [2024-12-09 17:37:29.879889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee6b70
00:27:03.545 [2024-12-09 17:37:29.881096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.881116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:03.545 [2024-12-09 17:37:29.889063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef0bc0
00:27:03.545 [2024-12-09 17:37:29.890274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.890293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:03.545 [2024-12-09 17:37:29.896598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efda78
00:27:03.545 [2024-12-09 17:37:29.897426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.897445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:27:03.545 [2024-12-09 17:37:29.907507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eedd58
00:27:03.545 [2024-12-09 17:37:29.908850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.908869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:03.545 [2024-12-09 17:37:29.914089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef1ca0
00:27:03.545 [2024-12-09 17:37:29.914776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.914796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:27:03.545 [2024-12-09 17:37:29.924966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef35f0
00:27:03.545 [2024-12-09 17:37:29.925830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.925849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:03.545 [2024-12-09 17:37:29.933392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef7da8
00:27:03.545 [2024-12-09 17:37:29.934362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.934381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:03.545 [2024-12-09 17:37:29.942973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef81e0
00:27:03.545 [2024-12-09 17:37:29.944023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.545 [2024-12-09 17:37:29.944043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:03.546 [2024-12-09 17:37:29.953965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeaef0
00:27:03.546 [2024-12-09 17:37:29.955554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.546 [2024-12-09 17:37:29.955573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:03.546 [2024-12-09 17:37:29.960358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeee38
00:27:03.546 [2024-12-09 17:37:29.961065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.546 [2024-12-09 17:37:29.961083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:03.546 [2024-12-09 17:37:29.970075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eed0b0
00:27:03.546 [2024-12-09 17:37:29.971075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.546 [2024-12-09 17:37:29.971095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:03.546 [2024-12-09 17:37:29.981131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eefae0
00:27:03.546 [2024-12-09 17:37:29.982575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.546 [2024-12-09 17:37:29.982594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:27:03.546 [2024-12-09 17:37:29.990267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee0ea0
00:27:03.546 [2024-12-09 17:37:29.991715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.546 [2024-12-09 17:37:29.991734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:03.546 [2024-12-09 17:37:29.997853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee3060
00:27:03.546 [2024-12-09 17:37:29.998525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.546 [2024-12-09 17:37:29.998547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:03.546 [2024-12-09 17:37:30.009309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef6cc8
00:27:03.546 [2024-12-09 17:37:30.011203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.546 [2024-12-09 17:37:30.011232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:03.546 [2024-12-09 17:37:30.017089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with
pdu=0x200016eed0b0 00:27:03.546 [2024-12-09 17:37:30.017844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.546 [2024-12-09 17:37:30.017866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:03.546 [2024-12-09 17:37:30.028339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efac10 00:27:03.546 [2024-12-09 17:37:30.029389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.546 [2024-12-09 17:37:30.029410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:03.546 [2024-12-09 17:37:30.038809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efef90 00:27:03.546 [2024-12-09 17:37:30.039992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.546 [2024-12-09 17:37:30.040013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:03.546 [2024-12-09 17:37:30.046412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee0630 00:27:03.546 [2024-12-09 17:37:30.047057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.546 [2024-12-09 17:37:30.047078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:03.546 [2024-12-09 17:37:30.056069] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eedd58 00:27:03.546 [2024-12-09 17:37:30.057064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.546 [2024-12-09 17:37:30.057084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:03.546 [2024-12-09 17:37:30.065447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efa3a0 00:27:03.546 [2024-12-09 17:37:30.065986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.546 [2024-12-09 17:37:30.066006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:03.546 [2024-12-09 17:37:30.075622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee0ea0 00:27:03.546 [2024-12-09 17:37:30.076417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.546 [2024-12-09 17:37:30.076439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:03.805 [2024-12-09 17:37:30.086079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee0ea0 00:27:03.805 [2024-12-09 17:37:30.087528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.805 [2024-12-09 17:37:30.087560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:03.805 [2024-12-09 
17:37:30.095649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efa7d8 00:27:03.806 [2024-12-09 17:37:30.097063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.097082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.103733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eeee38 00:27:03.806 [2024-12-09 17:37:30.104536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.104556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.112263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ede038 00:27:03.806 [2024-12-09 17:37:30.113152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.113176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.122075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef4f40 00:27:03.806 [2024-12-09 17:37:30.123275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.123295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 
sqhd:002c p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.132001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee12d8 00:27:03.806 [2024-12-09 17:37:30.133137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.133158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.141719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef20d8 00:27:03.806 [2024-12-09 17:37:30.142990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.143011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.151480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efcdd0 00:27:03.806 [2024-12-09 17:37:30.152882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.152901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.159902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eedd58 00:27:03.806 [2024-12-09 17:37:30.161286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.161305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.167985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee12d8 00:27:03.806 [2024-12-09 17:37:30.168741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.168760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.177759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee3498 00:27:03.806 [2024-12-09 17:37:30.178640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.178660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.187119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef8618 00:27:03.806 [2024-12-09 17:37:30.188009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.188030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.198170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef1430 00:27:03.806 [2024-12-09 17:37:30.199451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.199471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.206965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee95a0 00:27:03.806 [2024-12-09 17:37:30.208230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.208249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.216525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efac10 00:27:03.806 [2024-12-09 17:37:30.217791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.217811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.224325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efb048 00:27:03.806 [2024-12-09 17:37:30.224767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.224787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.233989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef6020 00:27:03.806 [2024-12-09 17:37:30.234526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:03.806 [2024-12-09 17:37:30.234546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.244605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef3a28 00:27:03.806 [2024-12-09 17:37:30.245995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.246018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.254047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef7538 00:27:03.806 [2024-12-09 17:37:30.255446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.255466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.261744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef35f0 00:27:03.806 [2024-12-09 17:37:30.262326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.262346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.273373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efc560 00:27:03.806 [2024-12-09 17:37:30.274998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:18588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.275017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.279955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef0bc0 00:27:03.806 [2024-12-09 17:37:30.280718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.280738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.289163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efa3a0 00:27:03.806 [2024-12-09 17:37:30.290019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.290037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.299538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efd640 00:27:03.806 [2024-12-09 17:37:30.300552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.300572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.308808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef2510 00:27:03.806 [2024-12-09 17:37:30.309871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.309890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.318072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee0a68 00:27:03.806 [2024-12-09 17:37:30.319103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.319122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.327388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016efb048 00:27:03.806 [2024-12-09 17:37:30.328425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.328448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:03.806 [2024-12-09 17:37:30.336666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eed4e8 00:27:03.806 [2024-12-09 17:37:30.337604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.806 [2024-12-09 17:37:30.337624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:04.066 [2024-12-09 17:37:30.346383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee6300 
00:27:04.066 [2024-12-09 17:37:30.347670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.066 [2024-12-09 17:37:30.347689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:04.066 [2024-12-09 17:37:30.354749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016eee5c8 00:27:04.066 [2024-12-09 17:37:30.356055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.066 [2024-12-09 17:37:30.356074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:04.066 [2024-12-09 17:37:30.362987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ef3a28 00:27:04.066 [2024-12-09 17:37:30.363623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.066 [2024-12-09 17:37:30.363642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:04.066 [2024-12-09 17:37:30.373318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b390) with pdu=0x200016ee84c0 00:27:04.066 [2024-12-09 17:37:30.374090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.066 [2024-12-09 17:37:30.374110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:04.066 27801.00 IOPS, 108.60 MiB/s 00:27:04.066 Latency(us) 00:27:04.066 
[2024-12-09T16:37:30.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.066 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:04.066 nvme0n1 : 2.00 27825.25 108.69 0.00 0.00 4596.04 1778.83 13107.20 00:27:04.066 [2024-12-09T16:37:30.606Z] =================================================================================================================== 00:27:04.066 [2024-12-09T16:37:30.606Z] Total : 27825.25 108.69 0.00 0.00 4596.04 1778.83 13107.20 00:27:04.066 { 00:27:04.066 "results": [ 00:27:04.066 { 00:27:04.066 "job": "nvme0n1", 00:27:04.066 "core_mask": "0x2", 00:27:04.066 "workload": "randwrite", 00:27:04.066 "status": "finished", 00:27:04.066 "queue_depth": 128, 00:27:04.066 "io_size": 4096, 00:27:04.066 "runtime": 2.002857, 00:27:04.066 "iops": 27825.251628049333, 00:27:04.066 "mibps": 108.6923891720677, 00:27:04.066 "io_failed": 0, 00:27:04.066 "io_timeout": 0, 00:27:04.066 "avg_latency_us": 4596.043437457811, 00:27:04.066 "min_latency_us": 1778.8342857142857, 00:27:04.066 "max_latency_us": 13107.2 00:27:04.066 } 00:27:04.066 ], 00:27:04.066 "core_count": 1 00:27:04.066 } 00:27:04.066 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:04.066 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:04.066 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:04.066 | .driver_specific 00:27:04.066 | .nvme_error 00:27:04.066 | .status_code 00:27:04.066 | .command_transient_transport_error' 00:27:04.066 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:04.066 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 
0 )) 00:27:04.066 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2050265 00:27:04.066 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2050265 ']' 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2050265 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2050265 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2050265' 00:27:04.325 killing process with pid 2050265 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2050265 00:27:04.325 Received shutdown signal, test time was about 2.000000 seconds 00:27:04.325 00:27:04.325 Latency(us) 00:27:04.325 [2024-12-09T16:37:30.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.325 [2024-12-09T16:37:30.865Z] =================================================================================================================== 00:27:04.325 [2024-12-09T16:37:30.865Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2050265 00:27:04.325 17:37:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2050727 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2050727 /var/tmp/bperf.sock 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2050727 ']' 00:27:04.325 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:04.326 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.326 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:04.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:04.326 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.326 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.326 [2024-12-09 17:37:30.864064] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:27:04.326 [2024-12-09 17:37:30.864111] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2050727 ] 00:27:04.326 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:04.326 Zero copy mechanism will not be used. 00:27:04.585 [2024-12-09 17:37:30.938687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.585 [2024-12-09 17:37:30.974819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.585 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:04.585 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:04.585 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:04.585 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:04.844 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:04.844 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.844 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:27:04.844 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.844 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.844 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:05.104 nvme0n1 00:27:05.104 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:05.104 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.104 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.104 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.104 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:05.104 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:05.364 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:05.364 Zero copy mechanism will not be used. 00:27:05.364 Running I/O for 2 seconds... 
00:27:05.364 [2024-12-09 17:37:31.691394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.691470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.691498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.697144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.697226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.697251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.701635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.701713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.701738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.706018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.706118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.706138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.710332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.710409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.710429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.714711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.714820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.714839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.718956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.719065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.719083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.723251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.723315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.723334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.727524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.727586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.727604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.731762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.731828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.731845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.736187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.736276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.736298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.740938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.741007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:05.364 [2024-12-09 17:37:31.741026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.746276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.746335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.746353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.751813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.751895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.751914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.756480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.364 [2024-12-09 17:37:31.756552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.364 [2024-12-09 17:37:31.756570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.364 [2024-12-09 17:37:31.761041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.761103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.761121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.765653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.765731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.765751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.770271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.770331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.770350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.774880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.774940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.774958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.779571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.779632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.779650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.783956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.784017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.784035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.788248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.788319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.788337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.792590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.792664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.792682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.796892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 
00:27:05.365 [2024-12-09 17:37:31.796956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.796975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.801203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.801267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.801285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.805429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.805496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.805514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.809735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.809787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.809806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.814033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.814088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.814106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.818300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.818376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.818395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.822506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.822570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.822589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.826738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.826792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.826810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.830986] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.831044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.831062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.835246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.835309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.835328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.839538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.839596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.839614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.843777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.843842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.843860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:05.365 [2024-12-09 17:37:31.848057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.848112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.848130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.852263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.852332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.852353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.856465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.856527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.856546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.860775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.860831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.860850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.865014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.865067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.865086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.869277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.869328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.869348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.873796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.873886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.873907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.365 [2024-12-09 17:37:31.879551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.365 [2024-12-09 17:37:31.879757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.365 [2024-12-09 17:37:31.879776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.366 [2024-12-09 17:37:31.885548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.366 [2024-12-09 17:37:31.885653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.366 [2024-12-09 17:37:31.885671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.366 [2024-12-09 17:37:31.891235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.366 [2024-12-09 17:37:31.891405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.366 [2024-12-09 17:37:31.891423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.366 [2024-12-09 17:37:31.897470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.366 [2024-12-09 17:37:31.897629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.366 [2024-12-09 17:37:31.897648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.626 [2024-12-09 17:37:31.904670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.626 [2024-12-09 17:37:31.904840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:05.626 [2024-12-09 17:37:31.904860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.626 [2024-12-09 17:37:31.910515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.626 [2024-12-09 17:37:31.910622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.626 [2024-12-09 17:37:31.910641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.626 [2024-12-09 17:37:31.914990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.626 [2024-12-09 17:37:31.915047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.626 [2024-12-09 17:37:31.915065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.626 [2024-12-09 17:37:31.919313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.626 [2024-12-09 17:37:31.919379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.626 [2024-12-09 17:37:31.919397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.626 [2024-12-09 17:37:31.923598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.627 [2024-12-09 17:37:31.923692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.627 [2024-12-09 17:37:31.923710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.627 [2024-12-09 17:37:31.927839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.627 [2024-12-09 17:37:31.927903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.627 [2024-12-09 17:37:31.927922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.627 [2024-12-09 17:37:31.932093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.627 [2024-12-09 17:37:31.932148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.627 [2024-12-09 17:37:31.932172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.627 [2024-12-09 17:37:31.936280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.627 [2024-12-09 17:37:31.936344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.627 [2024-12-09 17:37:31.936363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.627 [2024-12-09 17:37:31.940539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.627 [2024-12-09 17:37:31.940618] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.627 [2024-12-09 17:37:31.940637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.627 [2024-12-09 17:37:31.944828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.627 [2024-12-09 17:37:31.944896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.627 [2024-12-09 17:37:31.944914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.627 [2024-12-09 17:37:31.949147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.627 [2024-12-09 17:37:31.949213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.627 [2024-12-09 17:37:31.949233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.627 [2024-12-09 17:37:31.953444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.627 [2024-12-09 17:37:31.953505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.627 [2024-12-09 17:37:31.953524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.627 [2024-12-09 17:37:31.957779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.627 [2024-12-09 
17:37:31.957834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:31.957853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:31.961998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:31.962053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:31.962071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:31.966256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:31.966311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:31.966329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:31.970541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:31.970609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:31.970627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:31.974753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:31.974813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:31.974835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:31.979018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:31.979078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:31.979097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:31.983266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:31.983330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:31.983348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:31.987460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:31.987521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:31.987539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:31.991608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:31.991666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:31.991685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:31.995799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:31.995870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:31.995889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:32.000057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:32.000126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:32.000145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:32.004244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:32.004302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:32.004320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:32.008601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:32.008690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:32.008708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:32.013150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:32.013239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:32.013257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:32.018419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:32.018472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:32.018491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:32.023642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:32.023743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:32.023762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:32.028919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:32.028974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:32.028993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:32.033889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:32.033953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:32.033970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:32.038815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:32.038881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:32.038900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.627 [2024-12-09 17:37:32.043438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.627 [2024-12-09 17:37:32.043492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.627 [2024-12-09 17:37:32.043510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.047933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.048022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.048041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.052311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.052367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.052386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.056827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.056965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.056984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.061588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.061656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.061675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.066434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.066499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.066517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.071067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.071123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.071142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.075705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.075761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.075779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.080282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.080339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.080358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.084865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.084922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.084941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.089509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.089636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.089654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.094010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.094080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.094101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.098471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.098530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.098548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.102854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.102952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.102970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.107509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.107559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.107577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.112547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.112651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.112669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.117787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.117844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.117862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.123468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.123575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.123594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.128068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.128156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.128179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.132819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.132919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.132938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.138096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.138157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.138180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.143312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.143370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.143389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.147859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.147916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.147934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.152308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.152370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.152388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.156649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.156701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.156719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.628 [2024-12-09 17:37:32.161255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.628 [2024-12-09 17:37:32.161308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.628 [2024-12-09 17:37:32.161327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.165949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.166004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.166023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.170623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.170719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.170738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.175245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.175311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.175329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.179594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.179658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.179676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.184382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.184489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.184508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.189412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.189501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.189520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.194547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.194599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.194617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.200137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.200200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.200218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.206031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.206121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.206141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.211433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.211509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.211529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.216248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.216304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.216323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.220648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.220749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.220770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.225013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.225064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.225082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.229359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.229420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.229438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.233753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.233853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.233871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.238129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.238197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.238216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.242491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.242554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.242572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.246796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.246869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.246888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.251084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.251157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.251180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.255450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.255523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.255541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.259775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.889 [2024-12-09 17:37:32.259840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.889 [2024-12-09 17:37:32.259859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.889 [2024-12-09 17:37:32.264136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.264198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.264216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.268445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.268512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.268530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.272772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.272847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.272866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.277061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.277137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.277156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.281382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.281451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.281469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.285720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.285786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.285804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.289996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.290049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.290067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.294393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.294451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.294469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.298662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.298725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.298743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.303021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.303077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.303096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.307318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.307392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.307411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.311725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.311781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.311799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.316416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.316473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.316491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.321981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.322040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.322058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.326963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.327017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.327035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.331988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:05.890 [2024-12-09 17:37:32.332044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.890 [2024-12-09 17:37:32.332062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.890 [2024-12-09 17:37:32.337421] tcp.c:2241:data_crc32_calc_done: *ERROR*:
Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.890 [2024-12-09 17:37:32.337481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-12-09 17:37:32.337503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.890 [2024-12-09 17:37:32.342048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.890 [2024-12-09 17:37:32.342107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-12-09 17:37:32.342125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.890 [2024-12-09 17:37:32.346699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.890 [2024-12-09 17:37:32.346772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-12-09 17:37:32.346791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.890 [2024-12-09 17:37:32.351141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.890 [2024-12-09 17:37:32.351202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-12-09 17:37:32.351220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.890 [2024-12-09 
17:37:32.355567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.890 [2024-12-09 17:37:32.355620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-12-09 17:37:32.355638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.890 [2024-12-09 17:37:32.360272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.890 [2024-12-09 17:37:32.360379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-12-09 17:37:32.360397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.890 [2024-12-09 17:37:32.364827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.890 [2024-12-09 17:37:32.364922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-12-09 17:37:32.364940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.890 [2024-12-09 17:37:32.369470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.890 [2024-12-09 17:37:32.369586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-12-09 17:37:32.369605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:27:05.890 [2024-12-09 17:37:32.374051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.890 [2024-12-09 17:37:32.374112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-12-09 17:37:32.374130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.890 [2024-12-09 17:37:32.378797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.890 [2024-12-09 17:37:32.378866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-12-09 17:37:32.378886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.890 [2024-12-09 17:37:32.383506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.890 [2024-12-09 17:37:32.383560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-12-09 17:37:32.383579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.890 [2024-12-09 17:37:32.388352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.890 [2024-12-09 17:37:32.388429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-12-09 17:37:32.388448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.891 [2024-12-09 17:37:32.393254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.891 [2024-12-09 17:37:32.393331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.891 [2024-12-09 17:37:32.393349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.891 [2024-12-09 17:37:32.397953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.891 [2024-12-09 17:37:32.398016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.891 [2024-12-09 17:37:32.398034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.891 [2024-12-09 17:37:32.402859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.891 [2024-12-09 17:37:32.402935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.891 [2024-12-09 17:37:32.402953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.891 [2024-12-09 17:37:32.407648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.891 [2024-12-09 17:37:32.407733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.891 [2024-12-09 17:37:32.407752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.891 [2024-12-09 17:37:32.412352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.891 [2024-12-09 17:37:32.412404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.891 [2024-12-09 17:37:32.412422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.891 [2024-12-09 17:37:32.417030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.891 [2024-12-09 17:37:32.417084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.891 [2024-12-09 17:37:32.417102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.891 [2024-12-09 17:37:32.421545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.891 [2024-12-09 17:37:32.421618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.891 [2024-12-09 17:37:32.421636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.891 [2024-12-09 17:37:32.426191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:05.891 [2024-12-09 17:37:32.426245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:05.891 [2024-12-09 17:37:32.426264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.151 [2024-12-09 17:37:32.430907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.151 [2024-12-09 17:37:32.431001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.151 [2024-12-09 17:37:32.431019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.151 [2024-12-09 17:37:32.435856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.151 [2024-12-09 17:37:32.435986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.151 [2024-12-09 17:37:32.436005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.151 [2024-12-09 17:37:32.441357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.151 [2024-12-09 17:37:32.441425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.151 [2024-12-09 17:37:32.441442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.151 [2024-12-09 17:37:32.446365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.151 [2024-12-09 17:37:32.446421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.151 [2024-12-09 17:37:32.446439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.151 [2024-12-09 17:37:32.452095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.151 [2024-12-09 17:37:32.452234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.151 [2024-12-09 17:37:32.452252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.151 [2024-12-09 17:37:32.457520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.457597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.457619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.462647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.462717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.462740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.467382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.467443] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.467462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.472069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.472207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.472227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.477577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.477717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.477735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.482753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.482830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.482849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.488494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.488573] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.488592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.493624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.493696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.493714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.499153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.499240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.499259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.504264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.504315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.504333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.509469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with 
pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.509528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.509546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.514876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.514932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.514950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.520034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.520089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.520108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.525627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.525720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.525738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.530671] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.530740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.530759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.535482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.535585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.535603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.540110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.540186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.540204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.545152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.545228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.545247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 
17:37:32.550566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.550674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.550693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.555439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.555494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.555512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.560120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.560242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.560261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.564796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.564856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.564874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.569149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.569231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.569251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.573874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.573927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.573945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.578833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.578945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.578964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.152 [2024-12-09 17:37:32.584106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.152 [2024-12-09 17:37:32.584160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.152 [2024-12-09 17:37:32.584184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.152 [2024-12-09 17:37:32.589038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.152 [2024-12-09 17:37:32.589113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.152 [2024-12-09 17:37:32.589133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line record (tcp.c:2241 data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8, nvme_qpair.c:243 WRITE command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats from 17:37:32.594 through 17:37:32.958, differing only in lba, cid (0/1), and sqhd ...]
00:27:06.414 6556.00 IOPS, 819.50 MiB/s [2024-12-09T16:37:32.954Z]
00:27:06.676 [2024-12-09 17:37:32.964782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.676 [2024-12-09 17:37:32.965087] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:32.965108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:32.970251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:32.970511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:32.970532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:32.975705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:32.975970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:32.975990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:32.980629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:32.980864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:32.980885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:32.985154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 
00:27:06.676 [2024-12-09 17:37:32.985314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:32.985333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:32.990759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:32.990976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:32.990995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:32.997083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:32.997287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:32.997306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:33.003072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:33.003358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:33.003379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:33.010104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:33.010304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:33.010323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:33.016142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:33.016414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:33.016434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:33.022492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:33.022811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:33.022831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:33.028851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:33.029121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:33.029142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:33.035163] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:33.035339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:33.035357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:33.041564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:33.041792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:33.041813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:33.048326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:33.048503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:33.048522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:33.055207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:33.055421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:33.055442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:06.676 [2024-12-09 17:37:33.061923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.676 [2024-12-09 17:37:33.062105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.676 [2024-12-09 17:37:33.062125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.676 [2024-12-09 17:37:33.068775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.068937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.068956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.075433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.075696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.075717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.082004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.082334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.082355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.089007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.089291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.089311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.095622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.095860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.095881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.101906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.102180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.102200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.108824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.109036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.109056] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.115407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.115690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.115714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.122204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.122482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.122502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.128289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.128514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.128535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.134289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.134503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.134523] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.140627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.140875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.140896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.146359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.146621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.146642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.152860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.153121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.153142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.159038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.159320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:06.677 [2024-12-09 17:37:33.159341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.165024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.165281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.165300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.171342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.171637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.171658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.177967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.178236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.178257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.184006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.184304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.184326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.190612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.190834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.190855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.196594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.196804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.196823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.202339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.202608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.202628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.677 [2024-12-09 17:37:33.208703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.677 [2024-12-09 17:37:33.208919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.677 [2024-12-09 17:37:33.208939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.938 [2024-12-09 17:37:33.215034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.938 [2024-12-09 17:37:33.215316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.938 [2024-12-09 17:37:33.215336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.938 [2024-12-09 17:37:33.221016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.938 [2024-12-09 17:37:33.221241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.938 [2024-12-09 17:37:33.221262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.938 [2024-12-09 17:37:33.227638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.938 [2024-12-09 17:37:33.227825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.938 [2024-12-09 17:37:33.227844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.938 [2024-12-09 17:37:33.233195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 
00:27:06.938 [2024-12-09 17:37:33.233428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.938 [2024-12-09 17:37:33.233449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.938 [2024-12-09 17:37:33.237447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.938 [2024-12-09 17:37:33.237627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.938 [2024-12-09 17:37:33.237646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.938 [2024-12-09 17:37:33.241528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.938 [2024-12-09 17:37:33.241711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.938 [2024-12-09 17:37:33.241729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.938 [2024-12-09 17:37:33.245563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.938 [2024-12-09 17:37:33.245751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.938 [2024-12-09 17:37:33.245771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.938 [2024-12-09 17:37:33.249653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.938 [2024-12-09 17:37:33.249838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.938 [2024-12-09 17:37:33.249858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.938 [2024-12-09 17:37:33.255176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.938 [2024-12-09 17:37:33.255386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.938 [2024-12-09 17:37:33.255407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.938 [2024-12-09 17:37:33.260357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.938 [2024-12-09 17:37:33.260555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.938 [2024-12-09 17:37:33.260573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.938 [2024-12-09 17:37:33.264485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:06.938 [2024-12-09 17:37:33.264671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.939 [2024-12-09 17:37:33.264695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.939 [2024-12-09 17:37:33.268608] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.268790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.268811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.272634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.272827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.272847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.276727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.276916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.276934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.280823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.281010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.281028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.284911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.285099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.285118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.289184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.289423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.289442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.293813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.293988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.294007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.298517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.298738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.298758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.302380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.302571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.302591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.306144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.306356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.306375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.309926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.310119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.310138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.313698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.313890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.313908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.317441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.317629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.317649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.321148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.321353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.321371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.324893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.325083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.325103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.328631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.328835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.328856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.332357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.332546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.332566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.336052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.336246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.336264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.339777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.339972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.339990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.343455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.343659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.343679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.347123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.347330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.347348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.350840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.351044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.351063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.354521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.354723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.354742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.358237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.358435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.358456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.361960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.362155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.362179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.365619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.365812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.365834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.369325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.939 [2024-12-09 17:37:33.369516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.939 [2024-12-09 17:37:33.369535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.939 [2024-12-09 17:37:33.372964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.373154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.373178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.376621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.376814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.376833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.380662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.380831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.380852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.384419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.384579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.384597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.388490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.388660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.388679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.393684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.393818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.393839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.397707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.397858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.397876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.401652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.401813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.401832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.405636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.405796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.405814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.409742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.409891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.409910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.413741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.413923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.413941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.417725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.417887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.417905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.421672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.421866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.421884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.426160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.426336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.426354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.430055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.430224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.430243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.433865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.434034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.434052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.437721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.437889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.437907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.441518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.441694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.441712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.445329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.445510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.445528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.449090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.449267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.449285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.452870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.453022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.453041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.456679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.456846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.456864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.460455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.460626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.460644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.464333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.464493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.464514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.468412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.468676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.468702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.940 [2024-12-09 17:37:33.473439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:06.940 [2024-12-09 17:37:33.473712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.940 [2024-12-09 17:37:33.473734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.201 [2024-12-09 17:37:33.478031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.201 [2024-12-09 17:37:33.478207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.201 [2024-12-09 17:37:33.478226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.201 [2024-12-09 17:37:33.483027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.201 [2024-12-09 17:37:33.483311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.201 [2024-12-09 17:37:33.483332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.201 [2024-12-09 17:37:33.488100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.201 [2024-12-09 17:37:33.488368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.201 [2024-12-09 17:37:33.488388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.201 [2024-12-09 17:37:33.493222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.201 [2024-12-09 17:37:33.493374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.201 [2024-12-09 17:37:33.493393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.201 [2024-12-09 17:37:33.498358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.201 [2024-12-09 17:37:33.498571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.201 [2024-12-09 17:37:33.498590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.201 [2024-12-09 17:37:33.503964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.201 [2024-12-09 17:37:33.504286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.504306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.509210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.509347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.509366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.514612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.514861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.514881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.519749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.519914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.519933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.524881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.525140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.525161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.530461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.530714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.530734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.535744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.535989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.536009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.540761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.541013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.541034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.546014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.546292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.546314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.551031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.551314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.551334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.556431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.556721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.556742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.561610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.561864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.561885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.566778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.566936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.566954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.571644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.571828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.571847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.575744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.575919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.575937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.580007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.580154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.580179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.584237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.584381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.584400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.588336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.588484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.588503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.592350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.592531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.592550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.596509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.596654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.596676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.600627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.600781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.600800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.604597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.604808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.604828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.608885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.609052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.609070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.202 [2024-12-09 17:37:33.613668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8
00:27:07.202 [2024-12-09 17:37:33.613816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.202 [2024-12-09 17:37:33.613835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0
sqhd:0002 p:0 m:0 dnr:0 00:27:07.202 [2024-12-09 17:37:33.617437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.202 [2024-12-09 17:37:33.617617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.202 [2024-12-09 17:37:33.617636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.202 [2024-12-09 17:37:33.621232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.202 [2024-12-09 17:37:33.621407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.202 [2024-12-09 17:37:33.621425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.202 [2024-12-09 17:37:33.625028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.202 [2024-12-09 17:37:33.625198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.202 [2024-12-09 17:37:33.625216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.202 [2024-12-09 17:37:33.629129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.202 [2024-12-09 17:37:33.629272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.202 [2024-12-09 17:37:33.629291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.202 [2024-12-09 17:37:33.634068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.202 [2024-12-09 17:37:33.634216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.634235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.203 [2024-12-09 17:37:33.638027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.638142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.638160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.203 [2024-12-09 17:37:33.642046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.642183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.642201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.203 [2024-12-09 17:37:33.645932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.646054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.646073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.203 [2024-12-09 17:37:33.649836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.649967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.649985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.203 [2024-12-09 17:37:33.653740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.653869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.653887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.203 [2024-12-09 17:37:33.657749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.657880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.657899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.203 [2024-12-09 17:37:33.661706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.661836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:07.203 [2024-12-09 17:37:33.661855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.203 [2024-12-09 17:37:33.665618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.665766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.665784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.203 [2024-12-09 17:37:33.669594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.669725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.669743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.203 [2024-12-09 17:37:33.673595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.673720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.673739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.203 [2024-12-09 17:37:33.677712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.677892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.677910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.203 [2024-12-09 17:37:33.682465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.682591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.682610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.203 [2024-12-09 17:37:33.687014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.687132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.687150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.203 6553.50 IOPS, 819.19 MiB/s [2024-12-09T16:37:33.743Z] [2024-12-09 17:37:33.693393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e6b870) with pdu=0x200016eff3c8 00:27:07.203 [2024-12-09 17:37:33.693532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.203 [2024-12-09 17:37:33.693551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.203 00:27:07.203 Latency(us) 00:27:07.203 [2024-12-09T16:37:33.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.203 Job: 
nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:07.203 nvme0n1 : 2.00 6549.72 818.71 0.00 0.00 2438.21 1732.02 7833.11 00:27:07.203 [2024-12-09T16:37:33.743Z] =================================================================================================================== 00:27:07.203 [2024-12-09T16:37:33.743Z] Total : 6549.72 818.71 0.00 0.00 2438.21 1732.02 7833.11 00:27:07.203 { 00:27:07.203 "results": [ 00:27:07.203 { 00:27:07.203 "job": "nvme0n1", 00:27:07.203 "core_mask": "0x2", 00:27:07.203 "workload": "randwrite", 00:27:07.203 "status": "finished", 00:27:07.203 "queue_depth": 16, 00:27:07.203 "io_size": 131072, 00:27:07.203 "runtime": 2.003598, 00:27:07.203 "iops": 6549.71705901084, 00:27:07.203 "mibps": 818.714632376355, 00:27:07.203 "io_failed": 0, 00:27:07.203 "io_timeout": 0, 00:27:07.203 "avg_latency_us": 2438.210277992474, 00:27:07.203 "min_latency_us": 1732.0228571428572, 00:27:07.203 "max_latency_us": 7833.112380952381 00:27:07.203 } 00:27:07.203 ], 00:27:07.203 "core_count": 1 00:27:07.203 } 00:27:07.203 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:07.203 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:07.203 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:07.203 | .driver_specific 00:27:07.203 | .nvme_error 00:27:07.203 | .status_code 00:27:07.203 | .command_transient_transport_error' 00:27:07.203 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:07.462 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 424 > 0 )) 00:27:07.462 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # 
killprocess 2050727 00:27:07.462 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2050727 ']' 00:27:07.462 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2050727 00:27:07.462 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:07.462 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.462 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2050727 00:27:07.462 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:07.462 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:07.462 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2050727' 00:27:07.462 killing process with pid 2050727 00:27:07.462 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2050727 00:27:07.462 Received shutdown signal, test time was about 2.000000 seconds 00:27:07.462 00:27:07.462 Latency(us) 00:27:07.462 [2024-12-09T16:37:34.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.462 [2024-12-09T16:37:34.002Z] =================================================================================================================== 00:27:07.462 [2024-12-09T16:37:34.002Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:07.462 17:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2050727 00:27:07.721 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2049087 00:27:07.721 17:37:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2049087 ']' 00:27:07.721 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2049087 00:27:07.721 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:07.721 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.721 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2049087 00:27:07.721 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:07.721 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:07.721 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2049087' 00:27:07.721 killing process with pid 2049087 00:27:07.721 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2049087 00:27:07.721 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2049087 00:27:07.981 00:27:07.981 real 0m13.910s 00:27:07.981 user 0m26.583s 00:27:07.981 sys 0m4.605s 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:07.981 ************************************ 00:27:07.981 END TEST nvmf_digest_error 00:27:07.981 ************************************ 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 
00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:07.981 rmmod nvme_tcp 00:27:07.981 rmmod nvme_fabrics 00:27:07.981 rmmod nvme_keyring 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2049087 ']' 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2049087 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2049087 ']' 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2049087 00:27:07.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2049087) - No such process 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2049087 is not found' 00:27:07.981 Process with pid 2049087 is not found 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:07.981 17:37:34 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.981 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.519 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.519 00:27:10.519 real 0m36.245s 00:27:10.519 user 0m55.008s 00:27:10.519 sys 0m13.744s 00:27:10.519 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.519 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:10.519 ************************************ 00:27:10.519 END TEST nvmf_digest 00:27:10.519 ************************************ 00:27:10.519 17:37:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:10.519 17:37:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:10.519 17:37:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:10.519 17:37:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:10.519 17:37:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # 
'[' 3 -le 1 ']' 00:27:10.519 17:37:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.519 17:37:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.519 ************************************ 00:27:10.519 START TEST nvmf_bdevperf 00:27:10.519 ************************************ 00:27:10.519 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:10.519 * Looking for test storage... 00:27:10.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 
00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:10.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.520 --rc genhtml_branch_coverage=1 00:27:10.520 --rc genhtml_function_coverage=1 00:27:10.520 --rc genhtml_legend=1 00:27:10.520 --rc geninfo_all_blocks=1 00:27:10.520 --rc geninfo_unexecuted_blocks=1 00:27:10.520 00:27:10.520 ' 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:10.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.520 --rc genhtml_branch_coverage=1 00:27:10.520 --rc genhtml_function_coverage=1 00:27:10.520 --rc genhtml_legend=1 00:27:10.520 --rc geninfo_all_blocks=1 00:27:10.520 --rc geninfo_unexecuted_blocks=1 00:27:10.520 00:27:10.520 ' 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:10.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.520 --rc genhtml_branch_coverage=1 00:27:10.520 --rc genhtml_function_coverage=1 00:27:10.520 --rc genhtml_legend=1 00:27:10.520 --rc geninfo_all_blocks=1 00:27:10.520 --rc geninfo_unexecuted_blocks=1 00:27:10.520 00:27:10.520 ' 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:10.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.520 --rc genhtml_branch_coverage=1 00:27:10.520 --rc genhtml_function_coverage=1 00:27:10.520 --rc genhtml_legend=1 00:27:10.520 --rc geninfo_all_blocks=1 00:27:10.520 --rc geninfo_unexecuted_blocks=1 00:27:10.520 00:27:10.520 ' 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 
00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.520 17:37:36 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.520 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:10.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:27:10.521 17:37:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:17.092 Found 
0000:af:00.0 (0x8086 - 0x159b) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:17.092 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:17.092 Found net devices under 0000:af:00.0: cvl_0_0 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:17.092 Found net devices under 0000:af:00.1: cvl_0_1 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.092 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:17.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:27:17.093 00:27:17.093 --- 10.0.0.2 ping statistics --- 00:27:17.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.093 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:17.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:27:17.093 00:27:17.093 --- 10.0.0.1 ping statistics --- 00:27:17.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.093 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2054765 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2054765 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2054765 ']' 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.093 [2024-12-09 17:37:42.725911] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:27:17.093 [2024-12-09 17:37:42.725955] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.093 [2024-12-09 17:37:42.803012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:17.093 [2024-12-09 17:37:42.843505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.093 [2024-12-09 17:37:42.843543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:17.093 [2024-12-09 17:37:42.843550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.093 [2024-12-09 17:37:42.843556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.093 [2024-12-09 17:37:42.843561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:17.093 [2024-12-09 17:37:42.844908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.093 [2024-12-09 17:37:42.845019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.093 [2024-12-09 17:37:42.845020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.093 [2024-12-09 17:37:42.981964] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.093 17:37:42 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.093 17:37:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.093 Malloc0 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.093 [2024-12-09 17:37:43.051638] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.093 { 00:27:17.093 "params": { 00:27:17.093 "name": "Nvme$subsystem", 00:27:17.093 "trtype": "$TEST_TRANSPORT", 00:27:17.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.093 "adrfam": "ipv4", 00:27:17.093 "trsvcid": "$NVMF_PORT", 00:27:17.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.093 "hdgst": ${hdgst:-false}, 00:27:17.093 "ddgst": ${ddgst:-false} 00:27:17.093 }, 00:27:17.093 "method": "bdev_nvme_attach_controller" 00:27:17.093 } 00:27:17.093 EOF 00:27:17.093 )") 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:17.093 17:37:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:17.093 "params": { 00:27:17.093 "name": "Nvme1", 00:27:17.093 "trtype": "tcp", 00:27:17.093 "traddr": "10.0.0.2", 00:27:17.093 "adrfam": "ipv4", 00:27:17.093 "trsvcid": "4420", 00:27:17.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:17.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:17.093 "hdgst": false, 00:27:17.093 "ddgst": false 00:27:17.093 }, 00:27:17.093 "method": "bdev_nvme_attach_controller" 00:27:17.093 }' 00:27:17.093 [2024-12-09 17:37:43.103372] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:27:17.093 [2024-12-09 17:37:43.103416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054900 ] 00:27:17.093 [2024-12-09 17:37:43.179083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.093 [2024-12-09 17:37:43.218859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.093 Running I/O for 1 seconds... 
00:27:18.030 11344.00 IOPS, 44.31 MiB/s 00:27:18.030 Latency(us) 00:27:18.030 [2024-12-09T16:37:44.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.030 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:18.030 Verification LBA range: start 0x0 length 0x4000 00:27:18.030 Nvme1n1 : 1.01 11434.86 44.67 0.00 0.00 11134.35 928.43 13668.94 00:27:18.030 [2024-12-09T16:37:44.570Z] =================================================================================================================== 00:27:18.030 [2024-12-09T16:37:44.570Z] Total : 11434.86 44.67 0.00 0.00 11134.35 928.43 13668.94 00:27:18.289 17:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2055133 00:27:18.289 17:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:18.289 17:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:18.289 17:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:18.289 17:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:18.289 17:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:18.289 17:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:18.289 17:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:18.289 { 00:27:18.289 "params": { 00:27:18.289 "name": "Nvme$subsystem", 00:27:18.289 "trtype": "$TEST_TRANSPORT", 00:27:18.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.289 "adrfam": "ipv4", 00:27:18.289 "trsvcid": "$NVMF_PORT", 00:27:18.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.289 "hdgst": ${hdgst:-false}, 00:27:18.289 "ddgst": 
${ddgst:-false} 00:27:18.289 }, 00:27:18.289 "method": "bdev_nvme_attach_controller" 00:27:18.289 } 00:27:18.289 EOF 00:27:18.289 )") 00:27:18.289 17:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:18.289 17:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:18.289 17:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:18.289 17:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:18.289 "params": { 00:27:18.289 "name": "Nvme1", 00:27:18.289 "trtype": "tcp", 00:27:18.289 "traddr": "10.0.0.2", 00:27:18.290 "adrfam": "ipv4", 00:27:18.290 "trsvcid": "4420", 00:27:18.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:18.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:18.290 "hdgst": false, 00:27:18.290 "ddgst": false 00:27:18.290 }, 00:27:18.290 "method": "bdev_nvme_attach_controller" 00:27:18.290 }' 00:27:18.290 [2024-12-09 17:37:44.713511] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:27:18.290 [2024-12-09 17:37:44.713557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2055133 ] 00:27:18.290 [2024-12-09 17:37:44.788526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.290 [2024-12-09 17:37:44.825039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.548 Running I/O for 15 seconds... 
00:27:20.860 11110.00 IOPS, 43.40 MiB/s [2024-12-09T16:37:47.978Z] 11311.50 IOPS, 44.19 MiB/s [2024-12-09T16:37:47.978Z] 17:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2054765 00:27:21.438 17:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:21.438 [2024-12-09 17:37:47.682294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.438 [2024-12-09 17:37:47.682337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.438 [2024-12-09 17:37:47.682354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.438 [2024-12-09 17:37:47.682363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.438 [2024-12-09 17:37:47.682372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.438 [2024-12-09 17:37:47.682380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.438 [2024-12-09 17:37:47.682389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.438 [2024-12-09 17:37:47.682396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.438 [2024-12-09 17:37:47.682405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.438 [2024-12-09 17:37:47.682412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.438 [2024-12-09 17:37:47.682420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.438 [2024-12-09 17:37:47.682427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.438 [2024-12-09 17:37:47.682435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.438 [2024-12-09 17:37:47.682448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.438 [2024-12-09 17:37:47.682456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.438 [2024-12-09 17:37:47.682464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.438 [2024-12-09 17:37:47.682478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.438 [2024-12-09 17:37:47.682485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.438 [2024-12-09 17:37:47.682493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.438 [2024-12-09 17:37:47.682501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.438 [2024-12-09 17:37:47.682509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:21.438 [2024-12-09 17:37:47.682517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.438 [2024-12-09 17:37:47.682528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.438 [2024-12-09 17:37:47.682537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:21.439 [2024-12-09 17:37:47.682808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.682990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.682996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.683004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.683010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.683018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.683025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.683033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.683041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.683050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:21.439 [2024-12-09 17:37:47.683056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.683064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.683070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.683078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.683085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.683093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.683100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.683109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.683115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.683123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.683129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.683138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.683144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.683153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.439 [2024-12-09 17:37:47.683160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.439 [2024-12-09 17:37:47.683282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:21.440 [2024-12-09 17:37:47.683426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:21.440 [2024-12-09 17:37:47.683682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.440 [2024-12-09 17:37:47.683885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.440 [2024-12-09 17:37:47.683891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.441 [2024-12-09 17:37:47.683899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.441 [2024-12-09 17:37:47.683906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.441 [2024-12-09 17:37:47.683914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.441 [2024-12-09 17:37:47.683921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.441 [2024-12-09 17:37:47.683930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000
00:27:21.441 [2024-12-09 17:37:47.683936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.683944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:21.441 [2024-12-09 17:37:47.683950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.683960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.683967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.683975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.683981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.683988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.683994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.441 [2024-12-09 17:37:47.684389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.684396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5fd40 is same with the state(6) to be set
00:27:21.441 [2024-12-09 17:37:47.684406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:21.441 [2024-12-09 17:37:47.684411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:21.441 [2024-12-09 17:37:47.684416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113864 len:8 PRP1 0x0 PRP2 0x0
00:27:21.441 [2024-12-09 17:37:47.684424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.441 [2024-12-09 17:37:47.687313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.441 [2024-12-09 17:37:47.687369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.441 [2024-12-09 17:37:47.687884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.441 [2024-12-09 17:37:47.687900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.441 [2024-12-09 17:37:47.687908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.441 [2024-12-09 17:37:47.688083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.441 [2024-12-09 17:37:47.688266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.441 [2024-12-09 17:37:47.688278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.441 [2024-12-09 17:37:47.688286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.441 [2024-12-09 17:37:47.688295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.442 [2024-12-09 17:37:47.700481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.442 [2024-12-09 17:37:47.700850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.442 [2024-12-09 17:37:47.700903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.442 [2024-12-09 17:37:47.700928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.442 [2024-12-09 17:37:47.701529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.442 [2024-12-09 17:37:47.702003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.442 [2024-12-09 17:37:47.702012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.442 [2024-12-09 17:37:47.702019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.442 [2024-12-09 17:37:47.702025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.442 [2024-12-09 17:37:47.713300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.442 [2024-12-09 17:37:47.713751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.442 [2024-12-09 17:37:47.713796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.442 [2024-12-09 17:37:47.713819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.442 [2024-12-09 17:37:47.714243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.442 [2024-12-09 17:37:47.714414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.442 [2024-12-09 17:37:47.714423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.442 [2024-12-09 17:37:47.714430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.442 [2024-12-09 17:37:47.714436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.442 [2024-12-09 17:37:47.726151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.442 [2024-12-09 17:37:47.726571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.442 [2024-12-09 17:37:47.726590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.442 [2024-12-09 17:37:47.726598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.442 [2024-12-09 17:37:47.726758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.442 [2024-12-09 17:37:47.726918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.442 [2024-12-09 17:37:47.726927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.442 [2024-12-09 17:37:47.726934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.442 [2024-12-09 17:37:47.726943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.442 [2024-12-09 17:37:47.739073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.442 [2024-12-09 17:37:47.739448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.442 [2024-12-09 17:37:47.739466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.442 [2024-12-09 17:37:47.739474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.442 [2024-12-09 17:37:47.739643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.442 [2024-12-09 17:37:47.739812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.442 [2024-12-09 17:37:47.739822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.442 [2024-12-09 17:37:47.739829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.442 [2024-12-09 17:37:47.739835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.442 [2024-12-09 17:37:47.751810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.442 [2024-12-09 17:37:47.752249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.442 [2024-12-09 17:37:47.752296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.442 [2024-12-09 17:37:47.752321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.442 [2024-12-09 17:37:47.752903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.442 [2024-12-09 17:37:47.753091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.442 [2024-12-09 17:37:47.753099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.442 [2024-12-09 17:37:47.753105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.442 [2024-12-09 17:37:47.753111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.442 [2024-12-09 17:37:47.764549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.442 [2024-12-09 17:37:47.764963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.442 [2024-12-09 17:37:47.764979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.442 [2024-12-09 17:37:47.764987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.442 [2024-12-09 17:37:47.765147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.442 [2024-12-09 17:37:47.765312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.442 [2024-12-09 17:37:47.765322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.442 [2024-12-09 17:37:47.765329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.442 [2024-12-09 17:37:47.765336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.442 [2024-12-09 17:37:47.777484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.442 [2024-12-09 17:37:47.777895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.442 [2024-12-09 17:37:47.777911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.442 [2024-12-09 17:37:47.777919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.442 [2024-12-09 17:37:47.778087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.442 [2024-12-09 17:37:47.778265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.442 [2024-12-09 17:37:47.778275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.442 [2024-12-09 17:37:47.778282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.442 [2024-12-09 17:37:47.778288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.442 [2024-12-09 17:37:47.790250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.442 [2024-12-09 17:37:47.790679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.442 [2024-12-09 17:37:47.790724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.442 [2024-12-09 17:37:47.790748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.442 [2024-12-09 17:37:47.791238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.442 [2024-12-09 17:37:47.791408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.442 [2024-12-09 17:37:47.791418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.442 [2024-12-09 17:37:47.791425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.442 [2024-12-09 17:37:47.791431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.442 [2024-12-09 17:37:47.802993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.442 [2024-12-09 17:37:47.803342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.442 [2024-12-09 17:37:47.803359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.442 [2024-12-09 17:37:47.803366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.442 [2024-12-09 17:37:47.803525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.442 [2024-12-09 17:37:47.803685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.442 [2024-12-09 17:37:47.803694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.442 [2024-12-09 17:37:47.803700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.442 [2024-12-09 17:37:47.803706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.442 [2024-12-09 17:37:47.815877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.442 [2024-12-09 17:37:47.816311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.442 [2024-12-09 17:37:47.816357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.442 [2024-12-09 17:37:47.816381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.442 [2024-12-09 17:37:47.816940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.442 [2024-12-09 17:37:47.817101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.443 [2024-12-09 17:37:47.817109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.443 [2024-12-09 17:37:47.817115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.443 [2024-12-09 17:37:47.817121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.443 [2024-12-09 17:37:47.830976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.443 [2024-12-09 17:37:47.831430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.443 [2024-12-09 17:37:47.831453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.443 [2024-12-09 17:37:47.831463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.443 [2024-12-09 17:37:47.831718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.443 [2024-12-09 17:37:47.831974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.443 [2024-12-09 17:37:47.831987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.443 [2024-12-09 17:37:47.831997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.443 [2024-12-09 17:37:47.832007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.443 [2024-12-09 17:37:47.843868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.443 [2024-12-09 17:37:47.844276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.443 [2024-12-09 17:37:47.844294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.443 [2024-12-09 17:37:47.844301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.443 [2024-12-09 17:37:47.844469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.443 [2024-12-09 17:37:47.844638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.443 [2024-12-09 17:37:47.844648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.443 [2024-12-09 17:37:47.844654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.443 [2024-12-09 17:37:47.844661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.443 [2024-12-09 17:37:47.856797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.443 [2024-12-09 17:37:47.857191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.443 [2024-12-09 17:37:47.857207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.443 [2024-12-09 17:37:47.857215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.443 [2024-12-09 17:37:47.857373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.443 [2024-12-09 17:37:47.857532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.443 [2024-12-09 17:37:47.857544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.443 [2024-12-09 17:37:47.857551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.443 [2024-12-09 17:37:47.857557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.443 [2024-12-09 17:37:47.869598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.443 [2024-12-09 17:37:47.870012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.443 [2024-12-09 17:37:47.870029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.443 [2024-12-09 17:37:47.870036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.443 [2024-12-09 17:37:47.870201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.443 [2024-12-09 17:37:47.870361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.443 [2024-12-09 17:37:47.870370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.443 [2024-12-09 17:37:47.870377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.443 [2024-12-09 17:37:47.870383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.443 [2024-12-09 17:37:47.882416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.443 [2024-12-09 17:37:47.882844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.443 [2024-12-09 17:37:47.882890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.443 [2024-12-09 17:37:47.882913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.443 [2024-12-09 17:37:47.883452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.443 [2024-12-09 17:37:47.883623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.443 [2024-12-09 17:37:47.883633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.443 [2024-12-09 17:37:47.883639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.443 [2024-12-09 17:37:47.883645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.443 [2024-12-09 17:37:47.895282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.443 [2024-12-09 17:37:47.895623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.443 [2024-12-09 17:37:47.895640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.443 [2024-12-09 17:37:47.895646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.443 [2024-12-09 17:37:47.895805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.443 [2024-12-09 17:37:47.895964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.443 [2024-12-09 17:37:47.895973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.443 [2024-12-09 17:37:47.895980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.443 [2024-12-09 17:37:47.895989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.443 [2024-12-09 17:37:47.908065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.443 [2024-12-09 17:37:47.908481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.443 [2024-12-09 17:37:47.908498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.443 [2024-12-09 17:37:47.908506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.443 [2024-12-09 17:37:47.908665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.443 [2024-12-09 17:37:47.908824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.443 [2024-12-09 17:37:47.908834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.443 [2024-12-09 17:37:47.908840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.443 [2024-12-09 17:37:47.908846] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.443 [2024-12-09 17:37:47.920890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.443 [2024-12-09 17:37:47.921311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.443 [2024-12-09 17:37:47.921328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.443 [2024-12-09 17:37:47.921335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.443 [2024-12-09 17:37:47.921495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.443 [2024-12-09 17:37:47.921655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.443 [2024-12-09 17:37:47.921665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.443 [2024-12-09 17:37:47.921671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.443 [2024-12-09 17:37:47.921678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.443 [2024-12-09 17:37:47.933760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.443 [2024-12-09 17:37:47.934192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.443 [2024-12-09 17:37:47.934238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:21.443 [2024-12-09 17:37:47.934261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:21.443 [2024-12-09 17:37:47.934698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:21.443 [2024-12-09 17:37:47.934867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.443 [2024-12-09 17:37:47.934875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.444 [2024-12-09 17:37:47.934882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.444 [2024-12-09 17:37:47.934888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.444 [2024-12-09 17:37:47.946776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.444 [2024-12-09 17:37:47.947130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.444 [2024-12-09 17:37:47.947152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.444 [2024-12-09 17:37:47.947160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.444 [2024-12-09 17:37:47.947341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.444 [2024-12-09 17:37:47.947516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.444 [2024-12-09 17:37:47.947525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.444 [2024-12-09 17:37:47.947532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.444 [2024-12-09 17:37:47.947539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.444 [2024-12-09 17:37:47.959889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.444 [2024-12-09 17:37:47.960321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.444 [2024-12-09 17:37:47.960339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.444 [2024-12-09 17:37:47.960346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.444 [2024-12-09 17:37:47.960514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.444 [2024-12-09 17:37:47.960682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.444 [2024-12-09 17:37:47.960691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.444 [2024-12-09 17:37:47.960698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.444 [2024-12-09 17:37:47.960704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.444 [2024-12-09 17:37:47.972969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.444 [2024-12-09 17:37:47.973384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.444 [2024-12-09 17:37:47.973402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.444 [2024-12-09 17:37:47.973410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.444 [2024-12-09 17:37:47.973583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.444 [2024-12-09 17:37:47.973756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.444 [2024-12-09 17:37:47.973765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.444 [2024-12-09 17:37:47.973772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.444 [2024-12-09 17:37:47.973779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.703 [2024-12-09 17:37:47.985875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.703 [2024-12-09 17:37:47.986302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.703 [2024-12-09 17:37:47.986320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.703 [2024-12-09 17:37:47.986328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.703 [2024-12-09 17:37:47.986505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.703 [2024-12-09 17:37:47.986680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.703 [2024-12-09 17:37:47.986689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.703 [2024-12-09 17:37:47.986696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.703 [2024-12-09 17:37:47.986703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.703 10164.00 IOPS, 39.70 MiB/s [2024-12-09T16:37:48.243Z] [2024-12-09 17:37:47.998730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.703 [2024-12-09 17:37:47.999136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.703 [2024-12-09 17:37:47.999154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.703 [2024-12-09 17:37:47.999161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.703 [2024-12-09 17:37:47.999359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.703 [2024-12-09 17:37:47.999520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.703 [2024-12-09 17:37:47.999529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.703 [2024-12-09 17:37:47.999536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.703 [2024-12-09 17:37:47.999541] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.703 [2024-12-09 17:37:48.011551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.703 [2024-12-09 17:37:48.011973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.703 [2024-12-09 17:37:48.012017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.703 [2024-12-09 17:37:48.012040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.703 [2024-12-09 17:37:48.012638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.703 [2024-12-09 17:37:48.012948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.703 [2024-12-09 17:37:48.012957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.703 [2024-12-09 17:37:48.012963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.703 [2024-12-09 17:37:48.012969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.703 [2024-12-09 17:37:48.024494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.703 [2024-12-09 17:37:48.024892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.703 [2024-12-09 17:37:48.024908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.703 [2024-12-09 17:37:48.024915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.703 [2024-12-09 17:37:48.025074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.703 [2024-12-09 17:37:48.025258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.703 [2024-12-09 17:37:48.025274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.703 [2024-12-09 17:37:48.025280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.703 [2024-12-09 17:37:48.025287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.703 [2024-12-09 17:37:48.037314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.703 [2024-12-09 17:37:48.037643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.703 [2024-12-09 17:37:48.037660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.703 [2024-12-09 17:37:48.037667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.703 [2024-12-09 17:37:48.037827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.703 [2024-12-09 17:37:48.037986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.703 [2024-12-09 17:37:48.037996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.703 [2024-12-09 17:37:48.038002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.703 [2024-12-09 17:37:48.038008] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.703 [2024-12-09 17:37:48.050306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.703 [2024-12-09 17:37:48.050729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.703 [2024-12-09 17:37:48.050784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.703 [2024-12-09 17:37:48.050808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.703 [2024-12-09 17:37:48.051404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.703 [2024-12-09 17:37:48.051943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.703 [2024-12-09 17:37:48.051952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.704 [2024-12-09 17:37:48.051958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.704 [2024-12-09 17:37:48.051964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.704 [2024-12-09 17:37:48.063180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.704 [2024-12-09 17:37:48.063602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.704 [2024-12-09 17:37:48.063654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.704 [2024-12-09 17:37:48.063682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.704 [2024-12-09 17:37:48.064280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.704 [2024-12-09 17:37:48.064835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.704 [2024-12-09 17:37:48.064845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.704 [2024-12-09 17:37:48.064851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.704 [2024-12-09 17:37:48.064863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.704 [2024-12-09 17:37:48.076070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.704 [2024-12-09 17:37:48.076415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.704 [2024-12-09 17:37:48.076432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.704 [2024-12-09 17:37:48.076440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.704 [2024-12-09 17:37:48.076599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.704 [2024-12-09 17:37:48.076759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.704 [2024-12-09 17:37:48.076769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.704 [2024-12-09 17:37:48.076775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.704 [2024-12-09 17:37:48.076781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.704 [2024-12-09 17:37:48.089036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.704 [2024-12-09 17:37:48.089465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.704 [2024-12-09 17:37:48.089511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.704 [2024-12-09 17:37:48.089536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.704 [2024-12-09 17:37:48.090037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.704 [2024-12-09 17:37:48.090224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.704 [2024-12-09 17:37:48.090235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.704 [2024-12-09 17:37:48.090242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.704 [2024-12-09 17:37:48.090249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.704 [2024-12-09 17:37:48.104076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.704 [2024-12-09 17:37:48.104465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.704 [2024-12-09 17:37:48.104488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.704 [2024-12-09 17:37:48.104499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.704 [2024-12-09 17:37:48.104753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.704 [2024-12-09 17:37:48.105010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.704 [2024-12-09 17:37:48.105023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.704 [2024-12-09 17:37:48.105033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.704 [2024-12-09 17:37:48.105043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.704 [2024-12-09 17:37:48.117017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.704 [2024-12-09 17:37:48.117386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.704 [2024-12-09 17:37:48.117408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.704 [2024-12-09 17:37:48.117415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.704 [2024-12-09 17:37:48.117583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.704 [2024-12-09 17:37:48.117752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.704 [2024-12-09 17:37:48.117762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.704 [2024-12-09 17:37:48.117768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.704 [2024-12-09 17:37:48.117775] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.704 [2024-12-09 17:37:48.130045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.704 [2024-12-09 17:37:48.130404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.704 [2024-12-09 17:37:48.130423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.704 [2024-12-09 17:37:48.130432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.704 [2024-12-09 17:37:48.130605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.704 [2024-12-09 17:37:48.130778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.704 [2024-12-09 17:37:48.130788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.704 [2024-12-09 17:37:48.130796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.704 [2024-12-09 17:37:48.130803] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.704 [2024-12-09 17:37:48.143059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.704 [2024-12-09 17:37:48.143908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.704 [2024-12-09 17:37:48.143932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.704 [2024-12-09 17:37:48.143941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.704 [2024-12-09 17:37:48.144108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.704 [2024-12-09 17:37:48.144278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.704 [2024-12-09 17:37:48.144288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.704 [2024-12-09 17:37:48.144294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.704 [2024-12-09 17:37:48.144301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.704 [2024-12-09 17:37:48.155939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.704 [2024-12-09 17:37:48.156324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.704 [2024-12-09 17:37:48.156343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.704 [2024-12-09 17:37:48.156351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.704 [2024-12-09 17:37:48.156515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.704 [2024-12-09 17:37:48.156674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.704 [2024-12-09 17:37:48.156683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.704 [2024-12-09 17:37:48.156690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.704 [2024-12-09 17:37:48.156697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.704 [2024-12-09 17:37:48.168867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.704 [2024-12-09 17:37:48.169188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.704 [2024-12-09 17:37:48.169205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.704 [2024-12-09 17:37:48.169213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.704 [2024-12-09 17:37:48.169372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.704 [2024-12-09 17:37:48.169532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.704 [2024-12-09 17:37:48.169541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.704 [2024-12-09 17:37:48.169547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.704 [2024-12-09 17:37:48.169553] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.704 [2024-12-09 17:37:48.181757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.704 [2024-12-09 17:37:48.182102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.704 [2024-12-09 17:37:48.182119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.704 [2024-12-09 17:37:48.182127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.704 [2024-12-09 17:37:48.182295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.704 [2024-12-09 17:37:48.182454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.704 [2024-12-09 17:37:48.182464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.704 [2024-12-09 17:37:48.182470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.705 [2024-12-09 17:37:48.182476] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.705 [2024-12-09 17:37:48.194678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.705 [2024-12-09 17:37:48.195046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.705 [2024-12-09 17:37:48.195063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.705 [2024-12-09 17:37:48.195070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.705 [2024-12-09 17:37:48.195255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.705 [2024-12-09 17:37:48.195425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.705 [2024-12-09 17:37:48.195439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.705 [2024-12-09 17:37:48.195446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.705 [2024-12-09 17:37:48.195453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.705 [2024-12-09 17:37:48.207718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.705 [2024-12-09 17:37:48.208107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.705 [2024-12-09 17:37:48.208126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:21.705 [2024-12-09 17:37:48.208134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:21.705 [2024-12-09 17:37:48.208314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:21.705 [2024-12-09 17:37:48.208488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.705 [2024-12-09 17:37:48.208499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.705 [2024-12-09 17:37:48.208505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.705 [2024-12-09 17:37:48.208512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.705 [... 27 further identical reconnect cycles for tqpair=0xc36760 (addr=10.0.0.2, port=4420), 2024-12-09 17:37:48.220733 through 17:37:48.563275 at ~13 ms intervals, each failing with posix_sock_create connect() errno = 111 and ending in "bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed." ...]
00:27:22.227 [2024-12-09 17:37:48.575221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.227 [2024-12-09 17:37:48.575487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.227 [2024-12-09 17:37:48.575504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.227 [2024-12-09 17:37:48.575511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.227 [2024-12-09 17:37:48.575670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.227 [2024-12-09 17:37:48.575830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.227 [2024-12-09 17:37:48.575839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.227 [2024-12-09 17:37:48.575845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.227 [2024-12-09 17:37:48.575851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.227 [2024-12-09 17:37:48.587980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.227 [2024-12-09 17:37:48.588381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.227 [2024-12-09 17:37:48.588398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.227 [2024-12-09 17:37:48.588405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.227 [2024-12-09 17:37:48.588564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.227 [2024-12-09 17:37:48.588724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.227 [2024-12-09 17:37:48.588733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.227 [2024-12-09 17:37:48.588739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.227 [2024-12-09 17:37:48.588746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.227 [2024-12-09 17:37:48.600710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.227 [2024-12-09 17:37:48.601132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.227 [2024-12-09 17:37:48.601185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.227 [2024-12-09 17:37:48.601209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.227 [2024-12-09 17:37:48.601791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.227 [2024-12-09 17:37:48.602242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.227 [2024-12-09 17:37:48.602251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.227 [2024-12-09 17:37:48.602258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.227 [2024-12-09 17:37:48.602264] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.227 [2024-12-09 17:37:48.613466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.227 [2024-12-09 17:37:48.613786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.227 [2024-12-09 17:37:48.613802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.227 [2024-12-09 17:37:48.613810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.227 [2024-12-09 17:37:48.613969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.227 [2024-12-09 17:37:48.614128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.227 [2024-12-09 17:37:48.614137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.227 [2024-12-09 17:37:48.614143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.227 [2024-12-09 17:37:48.614149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.227 [2024-12-09 17:37:48.626260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.227 [2024-12-09 17:37:48.626681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.227 [2024-12-09 17:37:48.626697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.227 [2024-12-09 17:37:48.626707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.227 [2024-12-09 17:37:48.626867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.227 [2024-12-09 17:37:48.627027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.227 [2024-12-09 17:37:48.627036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.227 [2024-12-09 17:37:48.627042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.227 [2024-12-09 17:37:48.627048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.227 [2024-12-09 17:37:48.639041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.227 [2024-12-09 17:37:48.639460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.227 [2024-12-09 17:37:48.639509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.227 [2024-12-09 17:37:48.639532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.227 [2024-12-09 17:37:48.640123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.227 [2024-12-09 17:37:48.640289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.227 [2024-12-09 17:37:48.640299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.227 [2024-12-09 17:37:48.640305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.227 [2024-12-09 17:37:48.640312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.227 [2024-12-09 17:37:48.651786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.227 [2024-12-09 17:37:48.652202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.227 [2024-12-09 17:37:48.652220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.227 [2024-12-09 17:37:48.652227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.227 [2024-12-09 17:37:48.652825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.227 [2024-12-09 17:37:48.653376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.227 [2024-12-09 17:37:48.653385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.227 [2024-12-09 17:37:48.653392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.227 [2024-12-09 17:37:48.653399] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.227 [2024-12-09 17:37:48.664543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.227 [2024-12-09 17:37:48.664953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.227 [2024-12-09 17:37:48.664970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.227 [2024-12-09 17:37:48.664977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.227 [2024-12-09 17:37:48.665136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.227 [2024-12-09 17:37:48.665305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.227 [2024-12-09 17:37:48.665315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.228 [2024-12-09 17:37:48.665321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.228 [2024-12-09 17:37:48.665327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.228 [2024-12-09 17:37:48.677500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.228 [2024-12-09 17:37:48.677908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.228 [2024-12-09 17:37:48.677925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.228 [2024-12-09 17:37:48.677932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.228 [2024-12-09 17:37:48.678091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.228 [2024-12-09 17:37:48.678275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.228 [2024-12-09 17:37:48.678284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.228 [2024-12-09 17:37:48.678291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.228 [2024-12-09 17:37:48.678298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.228 [2024-12-09 17:37:48.690226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.228 [2024-12-09 17:37:48.690595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.228 [2024-12-09 17:37:48.690611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.228 [2024-12-09 17:37:48.690618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.228 [2024-12-09 17:37:48.690776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.228 [2024-12-09 17:37:48.690936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.228 [2024-12-09 17:37:48.690945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.228 [2024-12-09 17:37:48.690951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.228 [2024-12-09 17:37:48.690957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.228 [2024-12-09 17:37:48.702945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.228 [2024-12-09 17:37:48.703386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.228 [2024-12-09 17:37:48.703433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.228 [2024-12-09 17:37:48.703456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.228 [2024-12-09 17:37:48.704038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.228 [2024-12-09 17:37:48.704651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.228 [2024-12-09 17:37:48.704663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.228 [2024-12-09 17:37:48.704672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.228 [2024-12-09 17:37:48.704680] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.228 [2024-12-09 17:37:48.715948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.228 [2024-12-09 17:37:48.716357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.228 [2024-12-09 17:37:48.716374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.228 [2024-12-09 17:37:48.716382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.228 [2024-12-09 17:37:48.716541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.228 [2024-12-09 17:37:48.716701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.228 [2024-12-09 17:37:48.716710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.228 [2024-12-09 17:37:48.716717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.228 [2024-12-09 17:37:48.716723] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.228 [2024-12-09 17:37:48.728687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.228 [2024-12-09 17:37:48.729110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.228 [2024-12-09 17:37:48.729156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.228 [2024-12-09 17:37:48.729198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.228 [2024-12-09 17:37:48.729659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.228 [2024-12-09 17:37:48.729819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.228 [2024-12-09 17:37:48.729828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.228 [2024-12-09 17:37:48.729834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.228 [2024-12-09 17:37:48.729840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.228 [2024-12-09 17:37:48.741496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.228 [2024-12-09 17:37:48.741904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.228 [2024-12-09 17:37:48.741920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.228 [2024-12-09 17:37:48.741927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.228 [2024-12-09 17:37:48.742087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.228 [2024-12-09 17:37:48.742251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.228 [2024-12-09 17:37:48.742261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.228 [2024-12-09 17:37:48.742267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.228 [2024-12-09 17:37:48.742274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.228 [2024-12-09 17:37:48.754248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.228 [2024-12-09 17:37:48.754572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.228 [2024-12-09 17:37:48.754589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.228 [2024-12-09 17:37:48.754596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.228 [2024-12-09 17:37:48.754756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.228 [2024-12-09 17:37:48.754915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.228 [2024-12-09 17:37:48.754925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.228 [2024-12-09 17:37:48.754931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.228 [2024-12-09 17:37:48.754937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.488 [2024-12-09 17:37:48.767233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.488 [2024-12-09 17:37:48.767579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.488 [2024-12-09 17:37:48.767596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.488 [2024-12-09 17:37:48.767604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.488 [2024-12-09 17:37:48.767771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.488 [2024-12-09 17:37:48.767939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.488 [2024-12-09 17:37:48.767949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.488 [2024-12-09 17:37:48.767956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.488 [2024-12-09 17:37:48.767962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.488 [2024-12-09 17:37:48.780175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.488 [2024-12-09 17:37:48.780592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.488 [2024-12-09 17:37:48.780628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.488 [2024-12-09 17:37:48.780653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.488 [2024-12-09 17:37:48.781253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.488 [2024-12-09 17:37:48.781783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.488 [2024-12-09 17:37:48.781793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.488 [2024-12-09 17:37:48.781799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.488 [2024-12-09 17:37:48.781805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.488 [2024-12-09 17:37:48.793017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.488 [2024-12-09 17:37:48.793412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.488 [2024-12-09 17:37:48.793458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.488 [2024-12-09 17:37:48.793490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.488 [2024-12-09 17:37:48.793880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.488 [2024-12-09 17:37:48.794041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.488 [2024-12-09 17:37:48.794050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.488 [2024-12-09 17:37:48.794056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.488 [2024-12-09 17:37:48.794063] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.488 [2024-12-09 17:37:48.805876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.488 [2024-12-09 17:37:48.806275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.489 [2024-12-09 17:37:48.806292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.489 [2024-12-09 17:37:48.806299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.489 [2024-12-09 17:37:48.806457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.489 [2024-12-09 17:37:48.806617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.489 [2024-12-09 17:37:48.806626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.489 [2024-12-09 17:37:48.806632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.489 [2024-12-09 17:37:48.806638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.489 [2024-12-09 17:37:48.818750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.489 [2024-12-09 17:37:48.819082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.489 [2024-12-09 17:37:48.819121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.489 [2024-12-09 17:37:48.819146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.489 [2024-12-09 17:37:48.819680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.489 [2024-12-09 17:37:48.819841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.489 [2024-12-09 17:37:48.819851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.489 [2024-12-09 17:37:48.819857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.489 [2024-12-09 17:37:48.819863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.489 [2024-12-09 17:37:48.831523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.489 [2024-12-09 17:37:48.831938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.489 [2024-12-09 17:37:48.831983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.489 [2024-12-09 17:37:48.832007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.489 [2024-12-09 17:37:48.832610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.489 [2024-12-09 17:37:48.832774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.489 [2024-12-09 17:37:48.832783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.489 [2024-12-09 17:37:48.832790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.489 [2024-12-09 17:37:48.832796] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.489 [2024-12-09 17:37:48.844391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.489 [2024-12-09 17:37:48.844732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.489 [2024-12-09 17:37:48.844748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.489 [2024-12-09 17:37:48.844755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.489 [2024-12-09 17:37:48.844913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.489 [2024-12-09 17:37:48.845073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.489 [2024-12-09 17:37:48.845082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.489 [2024-12-09 17:37:48.845088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.489 [2024-12-09 17:37:48.845094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.489 [2024-12-09 17:37:48.857136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.489 [2024-12-09 17:37:48.857496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.489 [2024-12-09 17:37:48.857540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.489 [2024-12-09 17:37:48.857564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.489 [2024-12-09 17:37:48.858145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.489 [2024-12-09 17:37:48.858746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.489 [2024-12-09 17:37:48.858776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.489 [2024-12-09 17:37:48.858783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.489 [2024-12-09 17:37:48.858789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.489 [2024-12-09 17:37:48.872311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.489 [2024-12-09 17:37:48.872767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.489 [2024-12-09 17:37:48.872789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.489 [2024-12-09 17:37:48.872799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.489 [2024-12-09 17:37:48.873053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.489 [2024-12-09 17:37:48.873317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.489 [2024-12-09 17:37:48.873330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.489 [2024-12-09 17:37:48.873344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.489 [2024-12-09 17:37:48.873354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.489 [2024-12-09 17:37:48.885344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.489 [2024-12-09 17:37:48.885605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.489 [2024-12-09 17:37:48.885622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.489 [2024-12-09 17:37:48.885630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.489 [2024-12-09 17:37:48.885798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.489 [2024-12-09 17:37:48.885968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.489 [2024-12-09 17:37:48.885977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.489 [2024-12-09 17:37:48.885984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.489 [2024-12-09 17:37:48.885990] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.489 [2024-12-09 17:37:48.898159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.489 [2024-12-09 17:37:48.898553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.489 [2024-12-09 17:37:48.898569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.489 [2024-12-09 17:37:48.898576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.489 [2024-12-09 17:37:48.898735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.489 [2024-12-09 17:37:48.898895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.489 [2024-12-09 17:37:48.898904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.489 [2024-12-09 17:37:48.898911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.489 [2024-12-09 17:37:48.898917] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.489 [2024-12-09 17:37:48.910922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.489 [2024-12-09 17:37:48.911334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.489 [2024-12-09 17:37:48.911351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.489 [2024-12-09 17:37:48.911358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.489 [2024-12-09 17:37:48.911517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.489 [2024-12-09 17:37:48.911677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.489 [2024-12-09 17:37:48.911686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.489 [2024-12-09 17:37:48.911692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.489 [2024-12-09 17:37:48.911699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.489 [2024-12-09 17:37:48.923667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.489 [2024-12-09 17:37:48.924071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.489 [2024-12-09 17:37:48.924114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.489 [2024-12-09 17:37:48.924137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.489 [2024-12-09 17:37:48.924691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.489 [2024-12-09 17:37:48.924853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.489 [2024-12-09 17:37:48.924862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.489 [2024-12-09 17:37:48.924870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.489 [2024-12-09 17:37:48.924877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.489 [2024-12-09 17:37:48.936536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.489 [2024-12-09 17:37:48.936960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.489 [2024-12-09 17:37:48.937004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.489 [2024-12-09 17:37:48.937027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.490 [2024-12-09 17:37:48.937626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.490 [2024-12-09 17:37:48.938066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.490 [2024-12-09 17:37:48.938075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.490 [2024-12-09 17:37:48.938081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.490 [2024-12-09 17:37:48.938087] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.490 [2024-12-09 17:37:48.949324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.490 [2024-12-09 17:37:48.949637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.490 [2024-12-09 17:37:48.949682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.490 [2024-12-09 17:37:48.949706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.490 [2024-12-09 17:37:48.950226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.490 [2024-12-09 17:37:48.950397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.490 [2024-12-09 17:37:48.950406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.490 [2024-12-09 17:37:48.950413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.490 [2024-12-09 17:37:48.950419] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.490 [2024-12-09 17:37:48.962091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.490 [2024-12-09 17:37:48.962445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.490 [2024-12-09 17:37:48.962461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.490 [2024-12-09 17:37:48.962471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.490 [2024-12-09 17:37:48.962632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.490 [2024-12-09 17:37:48.962791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.490 [2024-12-09 17:37:48.962800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.490 [2024-12-09 17:37:48.962807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.490 [2024-12-09 17:37:48.962814] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.490 [2024-12-09 17:37:48.975150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.490 [2024-12-09 17:37:48.975569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.490 [2024-12-09 17:37:48.975587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.490 [2024-12-09 17:37:48.975595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.490 [2024-12-09 17:37:48.975754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.490 [2024-12-09 17:37:48.975913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.490 [2024-12-09 17:37:48.975922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.490 [2024-12-09 17:37:48.975929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.490 [2024-12-09 17:37:48.975935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.490 [2024-12-09 17:37:48.987916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.490 [2024-12-09 17:37:48.988332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.490 [2024-12-09 17:37:48.988349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.490 [2024-12-09 17:37:48.988357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.490 [2024-12-09 17:37:48.988516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.490 [2024-12-09 17:37:48.988675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.490 [2024-12-09 17:37:48.988685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.490 [2024-12-09 17:37:48.988691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.490 [2024-12-09 17:37:48.988697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.490 7623.00 IOPS, 29.78 MiB/s [2024-12-09T16:37:49.030Z] [2024-12-09 17:37:49.000750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.490 [2024-12-09 17:37:49.001183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.490 [2024-12-09 17:37:49.001230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.490 [2024-12-09 17:37:49.001254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.490 [2024-12-09 17:37:49.001837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.490 [2024-12-09 17:37:49.002269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.490 [2024-12-09 17:37:49.002278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.490 [2024-12-09 17:37:49.002284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.490 [2024-12-09 17:37:49.002291] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.490 [2024-12-09 17:37:49.013494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.490 [2024-12-09 17:37:49.013886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.490 [2024-12-09 17:37:49.013903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.490 [2024-12-09 17:37:49.013910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.490 [2024-12-09 17:37:49.014070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.490 [2024-12-09 17:37:49.014234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.490 [2024-12-09 17:37:49.014244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.490 [2024-12-09 17:37:49.014250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.490 [2024-12-09 17:37:49.014257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.490 [2024-12-09 17:37:49.026654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.751 [2024-12-09 17:37:49.027081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.751 [2024-12-09 17:37:49.027099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.751 [2024-12-09 17:37:49.027107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.751 [2024-12-09 17:37:49.027287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.751 [2024-12-09 17:37:49.027462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.751 [2024-12-09 17:37:49.027472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.751 [2024-12-09 17:37:49.027479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.751 [2024-12-09 17:37:49.027485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.751 [2024-12-09 17:37:49.039445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.751 [2024-12-09 17:37:49.039867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.751 [2024-12-09 17:37:49.039884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.751 [2024-12-09 17:37:49.039892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.751 [2024-12-09 17:37:49.040051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.751 [2024-12-09 17:37:49.040218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.751 [2024-12-09 17:37:49.040228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.751 [2024-12-09 17:37:49.040234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.751 [2024-12-09 17:37:49.040244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.751 [2024-12-09 17:37:49.052227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.751 [2024-12-09 17:37:49.052639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.751 [2024-12-09 17:37:49.052655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.751 [2024-12-09 17:37:49.052663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.751 [2024-12-09 17:37:49.052822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.751 [2024-12-09 17:37:49.052982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.751 [2024-12-09 17:37:49.052991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.751 [2024-12-09 17:37:49.052997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.751 [2024-12-09 17:37:49.053004] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.751 [2024-12-09 17:37:49.064963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.751 [2024-12-09 17:37:49.065363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.751 [2024-12-09 17:37:49.065381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.751 [2024-12-09 17:37:49.065388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.751 [2024-12-09 17:37:49.065547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.751 [2024-12-09 17:37:49.065707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.751 [2024-12-09 17:37:49.065717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.751 [2024-12-09 17:37:49.065723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.751 [2024-12-09 17:37:49.065729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.751 [2024-12-09 17:37:49.077833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.751 [2024-12-09 17:37:49.078246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.751 [2024-12-09 17:37:49.078263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.751 [2024-12-09 17:37:49.078271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.751 [2024-12-09 17:37:49.078430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.751 [2024-12-09 17:37:49.078589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.751 [2024-12-09 17:37:49.078598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.751 [2024-12-09 17:37:49.078605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.751 [2024-12-09 17:37:49.078611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.751 [2024-12-09 17:37:49.090576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.751 [2024-12-09 17:37:49.090840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.751 [2024-12-09 17:37:49.090856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.751 [2024-12-09 17:37:49.090865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.751 [2024-12-09 17:37:49.091023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.751 [2024-12-09 17:37:49.091190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.751 [2024-12-09 17:37:49.091200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.751 [2024-12-09 17:37:49.091206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.751 [2024-12-09 17:37:49.091213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.751 [2024-12-09 17:37:49.103331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.751 [2024-12-09 17:37:49.103674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.751 [2024-12-09 17:37:49.103691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.751 [2024-12-09 17:37:49.103698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.751 [2024-12-09 17:37:49.103857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.751 [2024-12-09 17:37:49.104017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.751 [2024-12-09 17:37:49.104026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.751 [2024-12-09 17:37:49.104032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.751 [2024-12-09 17:37:49.104038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.751 [2024-12-09 17:37:49.116160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.751 [2024-12-09 17:37:49.116582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.751 [2024-12-09 17:37:49.116599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.751 [2024-12-09 17:37:49.116607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.751 [2024-12-09 17:37:49.116767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.751 [2024-12-09 17:37:49.116928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.751 [2024-12-09 17:37:49.116937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.751 [2024-12-09 17:37:49.116944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.751 [2024-12-09 17:37:49.116950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.751 [2024-12-09 17:37:49.128958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.751 [2024-12-09 17:37:49.129387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.751 [2024-12-09 17:37:49.129436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.751 [2024-12-09 17:37:49.129469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.751 [2024-12-09 17:37:49.130053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.751 [2024-12-09 17:37:49.130500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.751 [2024-12-09 17:37:49.130510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.751 [2024-12-09 17:37:49.130516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.751 [2024-12-09 17:37:49.130523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.751 [2024-12-09 17:37:49.141766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.751 [2024-12-09 17:37:49.142187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.751 [2024-12-09 17:37:49.142205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.751 [2024-12-09 17:37:49.142213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.751 [2024-12-09 17:37:49.142382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.751 [2024-12-09 17:37:49.142560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.752 [2024-12-09 17:37:49.142569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.752 [2024-12-09 17:37:49.142575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.752 [2024-12-09 17:37:49.142582] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.752 [2024-12-09 17:37:49.154628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.752 [2024-12-09 17:37:49.155018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.752 [2024-12-09 17:37:49.155035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.752 [2024-12-09 17:37:49.155043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.752 [2024-12-09 17:37:49.155209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.752 [2024-12-09 17:37:49.155369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.752 [2024-12-09 17:37:49.155379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.752 [2024-12-09 17:37:49.155385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.752 [2024-12-09 17:37:49.155391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.752 [2024-12-09 17:37:49.167519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.752 [2024-12-09 17:37:49.167944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.752 [2024-12-09 17:37:49.167988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:22.752 [2024-12-09 17:37:49.168012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:22.752 [2024-12-09 17:37:49.168469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:22.752 [2024-12-09 17:37:49.168630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.752 [2024-12-09 17:37:49.168640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.752 [2024-12-09 17:37:49.168646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.752 [2024-12-09 17:37:49.168652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.752 [2024-12-09 17:37:49.180448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.752 [2024-12-09 17:37:49.180851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.752 [2024-12-09 17:37:49.180868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:22.752 [2024-12-09 17:37:49.180875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:22.752 [2024-12-09 17:37:49.181035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:22.752 [2024-12-09 17:37:49.181207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.752 [2024-12-09 17:37:49.181217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.752 [2024-12-09 17:37:49.181224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.752 [2024-12-09 17:37:49.181230] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.752 [2024-12-09 17:37:49.193169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.752 [2024-12-09 17:37:49.193488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.752 [2024-12-09 17:37:49.193505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:22.752 [2024-12-09 17:37:49.193513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:22.752 [2024-12-09 17:37:49.193673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:22.752 [2024-12-09 17:37:49.193833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.752 [2024-12-09 17:37:49.193843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.752 [2024-12-09 17:37:49.193849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.752 [2024-12-09 17:37:49.193856] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.752 [2024-12-09 17:37:49.205963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.752 [2024-12-09 17:37:49.206372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.752 [2024-12-09 17:37:49.206389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:22.752 [2024-12-09 17:37:49.206396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:22.752 [2024-12-09 17:37:49.206554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:22.752 [2024-12-09 17:37:49.206714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.752 [2024-12-09 17:37:49.206723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.752 [2024-12-09 17:37:49.206729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.752 [2024-12-09 17:37:49.206739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.752 [2024-12-09 17:37:49.218701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.752 [2024-12-09 17:37:49.219134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.752 [2024-12-09 17:37:49.219152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:22.752 [2024-12-09 17:37:49.219160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:22.752 [2024-12-09 17:37:49.219335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:22.752 [2024-12-09 17:37:49.219504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.752 [2024-12-09 17:37:49.219513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.752 [2024-12-09 17:37:49.219520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.752 [2024-12-09 17:37:49.219527] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.752 [2024-12-09 17:37:49.231750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.752 [2024-12-09 17:37:49.232175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.752 [2024-12-09 17:37:49.232194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:22.752 [2024-12-09 17:37:49.232201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:22.752 [2024-12-09 17:37:49.232370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:22.752 [2024-12-09 17:37:49.232539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.752 [2024-12-09 17:37:49.232548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.752 [2024-12-09 17:37:49.232555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.752 [2024-12-09 17:37:49.232561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.752 [2024-12-09 17:37:49.244543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.752 [2024-12-09 17:37:49.244959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.752 [2024-12-09 17:37:49.245010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:22.752 [2024-12-09 17:37:49.245034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:22.752 [2024-12-09 17:37:49.245559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:22.752 [2024-12-09 17:37:49.245720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.752 [2024-12-09 17:37:49.245729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.752 [2024-12-09 17:37:49.245736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.752 [2024-12-09 17:37:49.245742] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.752 [2024-12-09 17:37:49.257502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.752 [2024-12-09 17:37:49.257920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.752 [2024-12-09 17:37:49.257937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:22.752 [2024-12-09 17:37:49.257944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:22.752 [2024-12-09 17:37:49.258103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:22.752 [2024-12-09 17:37:49.258270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.752 [2024-12-09 17:37:49.258279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.752 [2024-12-09 17:37:49.258286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.752 [2024-12-09 17:37:49.258292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.752 [2024-12-09 17:37:49.270255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.752 [2024-12-09 17:37:49.270632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.752 [2024-12-09 17:37:49.270677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:22.752 [2024-12-09 17:37:49.270700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:22.752 [2024-12-09 17:37:49.271294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:22.752 [2024-12-09 17:37:49.271850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.752 [2024-12-09 17:37:49.271860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.753 [2024-12-09 17:37:49.271866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.753 [2024-12-09 17:37:49.271872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.753 [2024-12-09 17:37:49.283084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.753 [2024-12-09 17:37:49.283495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.753 [2024-12-09 17:37:49.283533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:22.753 [2024-12-09 17:37:49.283559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:22.753 [2024-12-09 17:37:49.284141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:22.753 [2024-12-09 17:37:49.284731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.753 [2024-12-09 17:37:49.284741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.753 [2024-12-09 17:37:49.284747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.753 [2024-12-09 17:37:49.284754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.013 [2024-12-09 17:37:49.296209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.013 [2024-12-09 17:37:49.296633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.013 [2024-12-09 17:37:49.296650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.013 [2024-12-09 17:37:49.296658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.013 [2024-12-09 17:37:49.296829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.013 [2024-12-09 17:37:49.297001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.013 [2024-12-09 17:37:49.297010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.013 [2024-12-09 17:37:49.297016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.013 [2024-12-09 17:37:49.297022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.013 [2024-12-09 17:37:49.309070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.013 [2024-12-09 17:37:49.309467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.013 [2024-12-09 17:37:49.309484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.013 [2024-12-09 17:37:49.309491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.013 [2024-12-09 17:37:49.309650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.013 [2024-12-09 17:37:49.309810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.013 [2024-12-09 17:37:49.309819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.013 [2024-12-09 17:37:49.309825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.013 [2024-12-09 17:37:49.309832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.013 [2024-12-09 17:37:49.321936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.013 [2024-12-09 17:37:49.322329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.013 [2024-12-09 17:37:49.322346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.013 [2024-12-09 17:37:49.322354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.013 [2024-12-09 17:37:49.322513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.013 [2024-12-09 17:37:49.322673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.013 [2024-12-09 17:37:49.322682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.013 [2024-12-09 17:37:49.322688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.013 [2024-12-09 17:37:49.322694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.013 [2024-12-09 17:37:49.334800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.013 [2024-12-09 17:37:49.335214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.013 [2024-12-09 17:37:49.335232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.013 [2024-12-09 17:37:49.335239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.013 [2024-12-09 17:37:49.335397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.013 [2024-12-09 17:37:49.335557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.013 [2024-12-09 17:37:49.335569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.013 [2024-12-09 17:37:49.335575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.013 [2024-12-09 17:37:49.335581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.013 [2024-12-09 17:37:49.347531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.013 [2024-12-09 17:37:49.347922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.013 [2024-12-09 17:37:49.347939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.013 [2024-12-09 17:37:49.347946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.013 [2024-12-09 17:37:49.348105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.013 [2024-12-09 17:37:49.348273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.013 [2024-12-09 17:37:49.348283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.013 [2024-12-09 17:37:49.348289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.013 [2024-12-09 17:37:49.348295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.013 [2024-12-09 17:37:49.360332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.013 [2024-12-09 17:37:49.360750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.013 [2024-12-09 17:37:49.360793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.013 [2024-12-09 17:37:49.360817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.013 [2024-12-09 17:37:49.361416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.013 [2024-12-09 17:37:49.361936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.013 [2024-12-09 17:37:49.361945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.013 [2024-12-09 17:37:49.361951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.013 [2024-12-09 17:37:49.361957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.013 [2024-12-09 17:37:49.373170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.013 [2024-12-09 17:37:49.373519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.013 [2024-12-09 17:37:49.373565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.013 [2024-12-09 17:37:49.373588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.013 [2024-12-09 17:37:49.374022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.013 [2024-12-09 17:37:49.374198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.013 [2024-12-09 17:37:49.374209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.013 [2024-12-09 17:37:49.374216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.013 [2024-12-09 17:37:49.374228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.013 [2024-12-09 17:37:49.385912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.013 [2024-12-09 17:37:49.386325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.013 [2024-12-09 17:37:49.386342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.013 [2024-12-09 17:37:49.386349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.013 [2024-12-09 17:37:49.386509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.013 [2024-12-09 17:37:49.386668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.013 [2024-12-09 17:37:49.386678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.013 [2024-12-09 17:37:49.386684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.013 [2024-12-09 17:37:49.386690] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.013 [2024-12-09 17:37:49.398643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.013 [2024-12-09 17:37:49.398999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.013 [2024-12-09 17:37:49.399043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.013 [2024-12-09 17:37:49.399067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.013 [2024-12-09 17:37:49.399665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.013 [2024-12-09 17:37:49.399891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.013 [2024-12-09 17:37:49.399900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.013 [2024-12-09 17:37:49.399906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.013 [2024-12-09 17:37:49.399912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.013 [2024-12-09 17:37:49.411410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.013 [2024-12-09 17:37:49.411815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.013 [2024-12-09 17:37:49.411832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.013 [2024-12-09 17:37:49.411839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.013 [2024-12-09 17:37:49.411998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.013 [2024-12-09 17:37:49.412157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.013 [2024-12-09 17:37:49.412173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.013 [2024-12-09 17:37:49.412181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.013 [2024-12-09 17:37:49.412188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.013 [2024-12-09 17:37:49.424139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.013 [2024-12-09 17:37:49.424552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.013 [2024-12-09 17:37:49.424572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.013 [2024-12-09 17:37:49.424579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.013 [2024-12-09 17:37:49.424739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.013 [2024-12-09 17:37:49.424898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.013 [2024-12-09 17:37:49.424907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.013 [2024-12-09 17:37:49.424913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.013 [2024-12-09 17:37:49.424919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.013 [2024-12-09 17:37:49.436876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.013 [2024-12-09 17:37:49.437281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.014 [2024-12-09 17:37:49.437326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.014 [2024-12-09 17:37:49.437349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.014 [2024-12-09 17:37:49.437930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.014 [2024-12-09 17:37:49.438403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.014 [2024-12-09 17:37:49.438413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.014 [2024-12-09 17:37:49.438419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.014 [2024-12-09 17:37:49.438425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.014 [2024-12-09 17:37:49.449652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.014 [2024-12-09 17:37:49.450067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.014 [2024-12-09 17:37:49.450111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.014 [2024-12-09 17:37:49.450134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.014 [2024-12-09 17:37:49.450734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.014 [2024-12-09 17:37:49.451219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.014 [2024-12-09 17:37:49.451229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.014 [2024-12-09 17:37:49.451235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.014 [2024-12-09 17:37:49.451241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.014 [2024-12-09 17:37:49.462382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.014 [2024-12-09 17:37:49.462800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.014 [2024-12-09 17:37:49.462844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.014 [2024-12-09 17:37:49.462868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.014 [2024-12-09 17:37:49.463271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.014 [2024-12-09 17:37:49.463432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.014 [2024-12-09 17:37:49.463442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.014 [2024-12-09 17:37:49.463448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.014 [2024-12-09 17:37:49.463455] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.014 [2024-12-09 17:37:49.475123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.014 [2024-12-09 17:37:49.475558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.014 [2024-12-09 17:37:49.475575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.014 [2024-12-09 17:37:49.475582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.014 [2024-12-09 17:37:49.475742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.014 [2024-12-09 17:37:49.475901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.014 [2024-12-09 17:37:49.475911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.014 [2024-12-09 17:37:49.475917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.014 [2024-12-09 17:37:49.475923] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.014 [2024-12-09 17:37:49.488192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.014 [2024-12-09 17:37:49.488624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.014 [2024-12-09 17:37:49.488642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.014 [2024-12-09 17:37:49.488650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.014 [2024-12-09 17:37:49.488823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.014 [2024-12-09 17:37:49.488998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.014 [2024-12-09 17:37:49.489008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.014 [2024-12-09 17:37:49.489015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.014 [2024-12-09 17:37:49.489022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.014 [2024-12-09 17:37:49.501243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.014 [2024-12-09 17:37:49.501604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.014 [2024-12-09 17:37:49.501621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.014 [2024-12-09 17:37:49.501629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.014 [2024-12-09 17:37:49.501802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.014 [2024-12-09 17:37:49.501975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.014 [2024-12-09 17:37:49.501989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.014 [2024-12-09 17:37:49.501996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.014 [2024-12-09 17:37:49.502002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.014 [2024-12-09 17:37:49.514214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.014 [2024-12-09 17:37:49.514548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.014 [2024-12-09 17:37:49.514566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.014 [2024-12-09 17:37:49.514574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.014 [2024-12-09 17:37:49.514747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.014 [2024-12-09 17:37:49.514920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.014 [2024-12-09 17:37:49.514930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.014 [2024-12-09 17:37:49.514937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.014 [2024-12-09 17:37:49.514944] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.014 [2024-12-09 17:37:49.527334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.014 [2024-12-09 17:37:49.527758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.014 [2024-12-09 17:37:49.527776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.014 [2024-12-09 17:37:49.527783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.014 [2024-12-09 17:37:49.527976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.014 [2024-12-09 17:37:49.528161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.014 [2024-12-09 17:37:49.528177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.014 [2024-12-09 17:37:49.528185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.014 [2024-12-09 17:37:49.528193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.014 [2024-12-09 17:37:49.540319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.014 [2024-12-09 17:37:49.540724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.014 [2024-12-09 17:37:49.540742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.014 [2024-12-09 17:37:49.540749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.014 [2024-12-09 17:37:49.540922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.014 [2024-12-09 17:37:49.541097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.014 [2024-12-09 17:37:49.541107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.014 [2024-12-09 17:37:49.541113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.014 [2024-12-09 17:37:49.541120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.275 [2024-12-09 17:37:49.553505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.275 [2024-12-09 17:37:49.553948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.275 [2024-12-09 17:37:49.553966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.275 [2024-12-09 17:37:49.553974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.275 [2024-12-09 17:37:49.554158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.275 [2024-12-09 17:37:49.554352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.275 [2024-12-09 17:37:49.554375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.275 [2024-12-09 17:37:49.554385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.275 [2024-12-09 17:37:49.554392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.275 [2024-12-09 17:37:49.566550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.275 [2024-12-09 17:37:49.566895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.275 [2024-12-09 17:37:49.566913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.275 [2024-12-09 17:37:49.566920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.275 [2024-12-09 17:37:49.567093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.275 [2024-12-09 17:37:49.567276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.275 [2024-12-09 17:37:49.567287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.275 [2024-12-09 17:37:49.567294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.275 [2024-12-09 17:37:49.567301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.275 [2024-12-09 17:37:49.579536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.275 [2024-12-09 17:37:49.579966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.275 [2024-12-09 17:37:49.580010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.275 [2024-12-09 17:37:49.580033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.275 [2024-12-09 17:37:49.580453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.275 [2024-12-09 17:37:49.580623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.275 [2024-12-09 17:37:49.580632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.275 [2024-12-09 17:37:49.580639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.275 [2024-12-09 17:37:49.580646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.275 [2024-12-09 17:37:49.592327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.275 [2024-12-09 17:37:49.592646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.275 [2024-12-09 17:37:49.592665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.275 [2024-12-09 17:37:49.592674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.275 [2024-12-09 17:37:49.592833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.275 [2024-12-09 17:37:49.592992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.275 [2024-12-09 17:37:49.593001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.275 [2024-12-09 17:37:49.593007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.275 [2024-12-09 17:37:49.593013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.275 [2024-12-09 17:37:49.605133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.275 [2024-12-09 17:37:49.605484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.275 [2024-12-09 17:37:49.605501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.275 [2024-12-09 17:37:49.605511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.275 [2024-12-09 17:37:49.605670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.275 [2024-12-09 17:37:49.605830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.275 [2024-12-09 17:37:49.605839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.275 [2024-12-09 17:37:49.605845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.275 [2024-12-09 17:37:49.605852] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.275 [2024-12-09 17:37:49.617967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.275 [2024-12-09 17:37:49.618375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.275 [2024-12-09 17:37:49.618395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.275 [2024-12-09 17:37:49.618403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.275 [2024-12-09 17:37:49.618563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.275 [2024-12-09 17:37:49.618722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.276 [2024-12-09 17:37:49.618731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.276 [2024-12-09 17:37:49.618738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.276 [2024-12-09 17:37:49.618744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.276 [2024-12-09 17:37:49.630725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.276 [2024-12-09 17:37:49.631006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.276 [2024-12-09 17:37:49.631024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.276 [2024-12-09 17:37:49.631031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.276 [2024-12-09 17:37:49.631201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.276 [2024-12-09 17:37:49.631361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.276 [2024-12-09 17:37:49.631370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.276 [2024-12-09 17:37:49.631377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.276 [2024-12-09 17:37:49.631383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.276 [2024-12-09 17:37:49.643602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.276 [2024-12-09 17:37:49.643973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.276 [2024-12-09 17:37:49.643990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.276 [2024-12-09 17:37:49.643997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.276 [2024-12-09 17:37:49.644156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.276 [2024-12-09 17:37:49.644321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.276 [2024-12-09 17:37:49.644331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.276 [2024-12-09 17:37:49.644337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.276 [2024-12-09 17:37:49.644344] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.276 [2024-12-09 17:37:49.656426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.276 [2024-12-09 17:37:49.656858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.276 [2024-12-09 17:37:49.656904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.276 [2024-12-09 17:37:49.656927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.276 [2024-12-09 17:37:49.657437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.276 [2024-12-09 17:37:49.657608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.276 [2024-12-09 17:37:49.657617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.276 [2024-12-09 17:37:49.657624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.276 [2024-12-09 17:37:49.657630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.276 [2024-12-09 17:37:49.669186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.276 [2024-12-09 17:37:49.669462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.276 [2024-12-09 17:37:49.669479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.276 [2024-12-09 17:37:49.669486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.276 [2024-12-09 17:37:49.669645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.276 [2024-12-09 17:37:49.669806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.276 [2024-12-09 17:37:49.669816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.276 [2024-12-09 17:37:49.669826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.276 [2024-12-09 17:37:49.669833] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.276 [2024-12-09 17:37:49.682200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.276 [2024-12-09 17:37:49.682539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.276 [2024-12-09 17:37:49.682557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.276 [2024-12-09 17:37:49.682564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.276 [2024-12-09 17:37:49.682732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.276 [2024-12-09 17:37:49.682907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.276 [2024-12-09 17:37:49.682917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.276 [2024-12-09 17:37:49.682923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.276 [2024-12-09 17:37:49.682930] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.276 [2024-12-09 17:37:49.695028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.276 [2024-12-09 17:37:49.695378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.276 [2024-12-09 17:37:49.695396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.276 [2024-12-09 17:37:49.695404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.276 [2024-12-09 17:37:49.695564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.276 [2024-12-09 17:37:49.695723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.276 [2024-12-09 17:37:49.695731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.276 [2024-12-09 17:37:49.695737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.276 [2024-12-09 17:37:49.695743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.276 [2024-12-09 17:37:49.707811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.276 [2024-12-09 17:37:49.708173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.276 [2024-12-09 17:37:49.708191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.276 [2024-12-09 17:37:49.708199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.276 [2024-12-09 17:37:49.708358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.276 [2024-12-09 17:37:49.708518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.276 [2024-12-09 17:37:49.708527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.276 [2024-12-09 17:37:49.708534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.276 [2024-12-09 17:37:49.708540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.276 [2024-12-09 17:37:49.720678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.276 [2024-12-09 17:37:49.721051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.276 [2024-12-09 17:37:49.721069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.276 [2024-12-09 17:37:49.721076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.276 [2024-12-09 17:37:49.721245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.276 [2024-12-09 17:37:49.721405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.276 [2024-12-09 17:37:49.721414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.276 [2024-12-09 17:37:49.721420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.276 [2024-12-09 17:37:49.721427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.276 [2024-12-09 17:37:49.733508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.276 [2024-12-09 17:37:49.733903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.276 [2024-12-09 17:37:49.733921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.276 [2024-12-09 17:37:49.733929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.276 [2024-12-09 17:37:49.734097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.276 [2024-12-09 17:37:49.734271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.276 [2024-12-09 17:37:49.734281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.276 [2024-12-09 17:37:49.734288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.276 [2024-12-09 17:37:49.734295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.276 [2024-12-09 17:37:49.746521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.276 [2024-12-09 17:37:49.746886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.276 [2024-12-09 17:37:49.746903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.276 [2024-12-09 17:37:49.746911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.276 [2024-12-09 17:37:49.747079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.276 [2024-12-09 17:37:49.747252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.276 [2024-12-09 17:37:49.747263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.276 [2024-12-09 17:37:49.747269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.277 [2024-12-09 17:37:49.747276] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.277 [2024-12-09 17:37:49.759463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.277 [2024-12-09 17:37:49.759793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.277 [2024-12-09 17:37:49.759814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.277 [2024-12-09 17:37:49.759822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.277 [2024-12-09 17:37:49.759981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.277 [2024-12-09 17:37:49.760140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.277 [2024-12-09 17:37:49.760150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.277 [2024-12-09 17:37:49.760157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.277 [2024-12-09 17:37:49.760164] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.277 [2024-12-09 17:37:49.772302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.277 [2024-12-09 17:37:49.772573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.277 [2024-12-09 17:37:49.772590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.277 [2024-12-09 17:37:49.772598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.277 [2024-12-09 17:37:49.772757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.277 [2024-12-09 17:37:49.772916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.277 [2024-12-09 17:37:49.772925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.277 [2024-12-09 17:37:49.772931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.277 [2024-12-09 17:37:49.772938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.277 [2024-12-09 17:37:49.785068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.277 [2024-12-09 17:37:49.785345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.277 [2024-12-09 17:37:49.785362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.277 [2024-12-09 17:37:49.785369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.277 [2024-12-09 17:37:49.785528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.277 [2024-12-09 17:37:49.785687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.277 [2024-12-09 17:37:49.785696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.277 [2024-12-09 17:37:49.785703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.277 [2024-12-09 17:37:49.785709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.277 [2024-12-09 17:37:49.797928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.277 [2024-12-09 17:37:49.798272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.277 [2024-12-09 17:37:49.798291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.277 [2024-12-09 17:37:49.798299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.277 [2024-12-09 17:37:49.798467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.277 [2024-12-09 17:37:49.798641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.277 [2024-12-09 17:37:49.798651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.277 [2024-12-09 17:37:49.798658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.277 [2024-12-09 17:37:49.798664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.277 [2024-12-09 17:37:49.810917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.277 [2024-12-09 17:37:49.811360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.277 [2024-12-09 17:37:49.811380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.277 [2024-12-09 17:37:49.811388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.277 [2024-12-09 17:37:49.811562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.277 [2024-12-09 17:37:49.811736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.277 [2024-12-09 17:37:49.811747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.277 [2024-12-09 17:37:49.811753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.277 [2024-12-09 17:37:49.811760] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.537 [2024-12-09 17:37:49.823747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.537 [2024-12-09 17:37:49.824192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.537 [2024-12-09 17:37:49.824237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.537 [2024-12-09 17:37:49.824261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.537 [2024-12-09 17:37:49.824842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.537 [2024-12-09 17:37:49.825239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.537 [2024-12-09 17:37:49.825249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.537 [2024-12-09 17:37:49.825256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.537 [2024-12-09 17:37:49.825262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.537 [2024-12-09 17:37:49.836492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.537 [2024-12-09 17:37:49.836813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.537 [2024-12-09 17:37:49.836830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.537 [2024-12-09 17:37:49.836837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.537 [2024-12-09 17:37:49.836996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.537 [2024-12-09 17:37:49.837155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.537 [2024-12-09 17:37:49.837172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.537 [2024-12-09 17:37:49.837183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.537 [2024-12-09 17:37:49.837190] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.537 [2024-12-09 17:37:49.849355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.537 [2024-12-09 17:37:49.849687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.537 [2024-12-09 17:37:49.849703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.538 [2024-12-09 17:37:49.849711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.538 [2024-12-09 17:37:49.849869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.538 [2024-12-09 17:37:49.850029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.538 [2024-12-09 17:37:49.850038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.538 [2024-12-09 17:37:49.850045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.538 [2024-12-09 17:37:49.850051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.538 [2024-12-09 17:37:49.862123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.538 [2024-12-09 17:37:49.862401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.538 [2024-12-09 17:37:49.862418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.538 [2024-12-09 17:37:49.862426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.538 [2024-12-09 17:37:49.862585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.538 [2024-12-09 17:37:49.862746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.538 [2024-12-09 17:37:49.862754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.538 [2024-12-09 17:37:49.862761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.538 [2024-12-09 17:37:49.862767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.538 [2024-12-09 17:37:49.874906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.538 [2024-12-09 17:37:49.875324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.538 [2024-12-09 17:37:49.875380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.538 [2024-12-09 17:37:49.875404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.538 [2024-12-09 17:37:49.875931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.538 [2024-12-09 17:37:49.876092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.538 [2024-12-09 17:37:49.876102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.538 [2024-12-09 17:37:49.876108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.538 [2024-12-09 17:37:49.876114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.538 [2024-12-09 17:37:49.887670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.538 [2024-12-09 17:37:49.888028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.538 [2024-12-09 17:37:49.888075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.538 [2024-12-09 17:37:49.888098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.538 [2024-12-09 17:37:49.888694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.538 [2024-12-09 17:37:49.889246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.538 [2024-12-09 17:37:49.889256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.538 [2024-12-09 17:37:49.889263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.538 [2024-12-09 17:37:49.889270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.538 [2024-12-09 17:37:49.900424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.538 [2024-12-09 17:37:49.900842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.538 [2024-12-09 17:37:49.900859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.538 [2024-12-09 17:37:49.900866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.538 [2024-12-09 17:37:49.901025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.538 [2024-12-09 17:37:49.901189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.538 [2024-12-09 17:37:49.901198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.538 [2024-12-09 17:37:49.901205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.538 [2024-12-09 17:37:49.901211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.538 [2024-12-09 17:37:49.913157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.538 [2024-12-09 17:37:49.913545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.538 [2024-12-09 17:37:49.913561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.538 [2024-12-09 17:37:49.913569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.538 [2024-12-09 17:37:49.913729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.538 [2024-12-09 17:37:49.913889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.538 [2024-12-09 17:37:49.913898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.538 [2024-12-09 17:37:49.913904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.538 [2024-12-09 17:37:49.913911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.538 [2024-12-09 17:37:49.925885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.538 [2024-12-09 17:37:49.926217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.538 [2024-12-09 17:37:49.926235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.538 [2024-12-09 17:37:49.926246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.538 [2024-12-09 17:37:49.926405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.538 [2024-12-09 17:37:49.926565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.538 [2024-12-09 17:37:49.926574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.538 [2024-12-09 17:37:49.926580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.538 [2024-12-09 17:37:49.926586] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.538 [2024-12-09 17:37:49.938689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.538 [2024-12-09 17:37:49.939097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.538 [2024-12-09 17:37:49.939133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.538 [2024-12-09 17:37:49.939157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.538 [2024-12-09 17:37:49.939757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.538 [2024-12-09 17:37:49.940337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.538 [2024-12-09 17:37:49.940347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.538 [2024-12-09 17:37:49.940354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.538 [2024-12-09 17:37:49.940362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.538 [2024-12-09 17:37:49.951535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.538 [2024-12-09 17:37:49.951962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.538 [2024-12-09 17:37:49.952006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.538 [2024-12-09 17:37:49.952030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.538 [2024-12-09 17:37:49.952515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.538 [2024-12-09 17:37:49.952678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.538 [2024-12-09 17:37:49.952686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.538 [2024-12-09 17:37:49.952692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.538 [2024-12-09 17:37:49.952699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.538 [2024-12-09 17:37:49.964291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.538 [2024-12-09 17:37:49.964705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.538 [2024-12-09 17:37:49.964747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.538 [2024-12-09 17:37:49.964772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.538 [2024-12-09 17:37:49.965300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.538 [2024-12-09 17:37:49.965465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.538 [2024-12-09 17:37:49.965473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.538 [2024-12-09 17:37:49.965479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.538 [2024-12-09 17:37:49.965485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.538 [2024-12-09 17:37:49.977276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.538 [2024-12-09 17:37:49.977700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.538 [2024-12-09 17:37:49.977717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.538 [2024-12-09 17:37:49.977724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.539 [2024-12-09 17:37:49.977892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.539 [2024-12-09 17:37:49.978060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.539 [2024-12-09 17:37:49.978070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.539 [2024-12-09 17:37:49.978076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.539 [2024-12-09 17:37:49.978083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.539 [2024-12-09 17:37:49.990097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.539 [2024-12-09 17:37:49.990523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.539 [2024-12-09 17:37:49.990567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.539 [2024-12-09 17:37:49.990590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.539 [2024-12-09 17:37:49.991196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.539 [2024-12-09 17:37:49.991358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.539 [2024-12-09 17:37:49.991368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.539 [2024-12-09 17:37:49.991374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.539 [2024-12-09 17:37:49.991380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.539 6098.40 IOPS, 23.82 MiB/s [2024-12-09T16:37:50.079Z] [2024-12-09 17:37:50.003100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.539 [2024-12-09 17:37:50.003444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.539 [2024-12-09 17:37:50.003463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.539 [2024-12-09 17:37:50.003471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.539 [2024-12-09 17:37:50.003645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.539 [2024-12-09 17:37:50.003818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.539 [2024-12-09 17:37:50.003829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.539 [2024-12-09 17:37:50.003842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.539 [2024-12-09 17:37:50.003849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.539 [2024-12-09 17:37:50.016797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.539 [2024-12-09 17:37:50.017112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.539 [2024-12-09 17:37:50.017142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.539 [2024-12-09 17:37:50.017152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.539 [2024-12-09 17:37:50.017351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.539 [2024-12-09 17:37:50.017528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.539 [2024-12-09 17:37:50.017538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.539 [2024-12-09 17:37:50.017547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.539 [2024-12-09 17:37:50.017554] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.539 [2024-12-09 17:37:50.029680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.539 [2024-12-09 17:37:50.030078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.539 [2024-12-09 17:37:50.030123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.539 [2024-12-09 17:37:50.030147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.539 [2024-12-09 17:37:50.030614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.539 [2024-12-09 17:37:50.030785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.539 [2024-12-09 17:37:50.030795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.539 [2024-12-09 17:37:50.030802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.539 [2024-12-09 17:37:50.030810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.539 [2024-12-09 17:37:50.043287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.539 [2024-12-09 17:37:50.043730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.539 [2024-12-09 17:37:50.043748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.539 [2024-12-09 17:37:50.043757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.539 [2024-12-09 17:37:50.043931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.539 [2024-12-09 17:37:50.044104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.539 [2024-12-09 17:37:50.044114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.539 [2024-12-09 17:37:50.044121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.539 [2024-12-09 17:37:50.044127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.539 [2024-12-09 17:37:50.056362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.539 [2024-12-09 17:37:50.056796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.539 [2024-12-09 17:37:50.056814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.539 [2024-12-09 17:37:50.056822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.539 [2024-12-09 17:37:50.056996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.539 [2024-12-09 17:37:50.057176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.539 [2024-12-09 17:37:50.057187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.539 [2024-12-09 17:37:50.057194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.539 [2024-12-09 17:37:50.057201] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.539 [2024-12-09 17:37:50.069397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.539 [2024-12-09 17:37:50.069831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.539 [2024-12-09 17:37:50.069848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.539 [2024-12-09 17:37:50.069856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.539 [2024-12-09 17:37:50.070029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.539 [2024-12-09 17:37:50.070210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.539 [2024-12-09 17:37:50.070220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.539 [2024-12-09 17:37:50.070228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.539 [2024-12-09 17:37:50.070235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.800 [2024-12-09 17:37:50.082478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.800 [2024-12-09 17:37:50.082923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.800 [2024-12-09 17:37:50.082950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:23.800 [2024-12-09 17:37:50.082963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:23.800 [2024-12-09 17:37:50.083207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:23.800 [2024-12-09 17:37:50.083468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.800 [2024-12-09 17:37:50.083491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.800 [2024-12-09 17:37:50.083505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.800 [2024-12-09 17:37:50.083516] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.800 [2024-12-09 17:37:50.095502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.800 [2024-12-09 17:37:50.095925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.800 [2024-12-09 17:37:50.095943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.800 [2024-12-09 17:37:50.095954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.800 [2024-12-09 17:37:50.096128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.800 [2024-12-09 17:37:50.096310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.800 [2024-12-09 17:37:50.096322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.800 [2024-12-09 17:37:50.096329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.800 [2024-12-09 17:37:50.096335] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.800 [2024-12-09 17:37:50.108574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.800 [2024-12-09 17:37:50.109005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.800 [2024-12-09 17:37:50.109023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.800 [2024-12-09 17:37:50.109031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.800 [2024-12-09 17:37:50.109211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.800 [2024-12-09 17:37:50.109386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.800 [2024-12-09 17:37:50.109395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.800 [2024-12-09 17:37:50.109402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.800 [2024-12-09 17:37:50.109409] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.800 [2024-12-09 17:37:50.121583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.800 [2024-12-09 17:37:50.121935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.800 [2024-12-09 17:37:50.121953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.800 [2024-12-09 17:37:50.121961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.800 [2024-12-09 17:37:50.122141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.800 [2024-12-09 17:37:50.122316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.800 [2024-12-09 17:37:50.122327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.800 [2024-12-09 17:37:50.122334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.800 [2024-12-09 17:37:50.122340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.800 [2024-12-09 17:37:50.134536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.800 [2024-12-09 17:37:50.134976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.800 [2024-12-09 17:37:50.135023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.800 [2024-12-09 17:37:50.135049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.800 [2024-12-09 17:37:50.135648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.800 [2024-12-09 17:37:50.136107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.800 [2024-12-09 17:37:50.136117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.800 [2024-12-09 17:37:50.136123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.800 [2024-12-09 17:37:50.136130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.800 [2024-12-09 17:37:50.147448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.800 [2024-12-09 17:37:50.147794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.800 [2024-12-09 17:37:50.147812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.800 [2024-12-09 17:37:50.147820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.800 [2024-12-09 17:37:50.147988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.800 [2024-12-09 17:37:50.148182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.800 [2024-12-09 17:37:50.148193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.800 [2024-12-09 17:37:50.148200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.800 [2024-12-09 17:37:50.148208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.800 [2024-12-09 17:37:50.160493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.800 [2024-12-09 17:37:50.160905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.800 [2024-12-09 17:37:50.160950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.800 [2024-12-09 17:37:50.160972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.800 [2024-12-09 17:37:50.161571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.800 [2024-12-09 17:37:50.161807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.800 [2024-12-09 17:37:50.161817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.800 [2024-12-09 17:37:50.161823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.800 [2024-12-09 17:37:50.161830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.800 [2024-12-09 17:37:50.173557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.800 [2024-12-09 17:37:50.173999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.800 [2024-12-09 17:37:50.174045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.800 [2024-12-09 17:37:50.174067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.800 [2024-12-09 17:37:50.174667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.800 [2024-12-09 17:37:50.174885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.800 [2024-12-09 17:37:50.174895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.800 [2024-12-09 17:37:50.174905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.800 [2024-12-09 17:37:50.174912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.800 [2024-12-09 17:37:50.188844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.800 [2024-12-09 17:37:50.189361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.800 [2024-12-09 17:37:50.189411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.800 [2024-12-09 17:37:50.189435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.800 [2024-12-09 17:37:50.189998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.800 [2024-12-09 17:37:50.190261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.800 [2024-12-09 17:37:50.190275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.800 [2024-12-09 17:37:50.190285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.800 [2024-12-09 17:37:50.190295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.800 [2024-12-09 17:37:50.201735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.800 [2024-12-09 17:37:50.202090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.800 [2024-12-09 17:37:50.202107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.800 [2024-12-09 17:37:50.202115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.800 [2024-12-09 17:37:50.202288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.800 [2024-12-09 17:37:50.202458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.800 [2024-12-09 17:37:50.202468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.800 [2024-12-09 17:37:50.202474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.800 [2024-12-09 17:37:50.202481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.801 [2024-12-09 17:37:50.214862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.801 [2024-12-09 17:37:50.215184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.801 [2024-12-09 17:37:50.215202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.801 [2024-12-09 17:37:50.215210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.801 [2024-12-09 17:37:50.215384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.801 [2024-12-09 17:37:50.215558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.801 [2024-12-09 17:37:50.215568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.801 [2024-12-09 17:37:50.215576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.801 [2024-12-09 17:37:50.215583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.801 [2024-12-09 17:37:50.227859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.801 [2024-12-09 17:37:50.228201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.801 [2024-12-09 17:37:50.228218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.801 [2024-12-09 17:37:50.228226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.801 [2024-12-09 17:37:50.228393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.801 [2024-12-09 17:37:50.228561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.801 [2024-12-09 17:37:50.228571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.801 [2024-12-09 17:37:50.228577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.801 [2024-12-09 17:37:50.228583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.801 [2024-12-09 17:37:50.240878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.801 [2024-12-09 17:37:50.241302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.801 [2024-12-09 17:37:50.241320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.801 [2024-12-09 17:37:50.241329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.801 [2024-12-09 17:37:50.241497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.801 [2024-12-09 17:37:50.241666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.801 [2024-12-09 17:37:50.241675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.801 [2024-12-09 17:37:50.241682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.801 [2024-12-09 17:37:50.241689] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.801 [2024-12-09 17:37:50.253872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.801 [2024-12-09 17:37:50.254319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.801 [2024-12-09 17:37:50.254337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.801 [2024-12-09 17:37:50.254345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.801 [2024-12-09 17:37:50.254519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.801 [2024-12-09 17:37:50.254694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.801 [2024-12-09 17:37:50.254703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.801 [2024-12-09 17:37:50.254710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.801 [2024-12-09 17:37:50.254717] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.801 [2024-12-09 17:37:50.266917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.801 [2024-12-09 17:37:50.267325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.801 [2024-12-09 17:37:50.267344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.801 [2024-12-09 17:37:50.267355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.801 [2024-12-09 17:37:50.267553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.801 [2024-12-09 17:37:50.267727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.801 [2024-12-09 17:37:50.267737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.801 [2024-12-09 17:37:50.267744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.801 [2024-12-09 17:37:50.267751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.801 [2024-12-09 17:37:50.280010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.801 [2024-12-09 17:37:50.280415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.801 [2024-12-09 17:37:50.280433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.801 [2024-12-09 17:37:50.280441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.801 [2024-12-09 17:37:50.280609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.801 [2024-12-09 17:37:50.280777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.801 [2024-12-09 17:37:50.280787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.801 [2024-12-09 17:37:50.280794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.801 [2024-12-09 17:37:50.280801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.801 [2024-12-09 17:37:50.292971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.801 [2024-12-09 17:37:50.293399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.801 [2024-12-09 17:37:50.293417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.801 [2024-12-09 17:37:50.293425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.801 [2024-12-09 17:37:50.293593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.801 [2024-12-09 17:37:50.293761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.801 [2024-12-09 17:37:50.293771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.801 [2024-12-09 17:37:50.293777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.801 [2024-12-09 17:37:50.293784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.801 [2024-12-09 17:37:50.305944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.801 [2024-12-09 17:37:50.306374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.801 [2024-12-09 17:37:50.306391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.801 [2024-12-09 17:37:50.306399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.801 [2024-12-09 17:37:50.306567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.801 [2024-12-09 17:37:50.306739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.801 [2024-12-09 17:37:50.306748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.801 [2024-12-09 17:37:50.306757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.801 [2024-12-09 17:37:50.306764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.801 [2024-12-09 17:37:50.318834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.801 [2024-12-09 17:37:50.319183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.801 [2024-12-09 17:37:50.319200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.801 [2024-12-09 17:37:50.319207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.801 [2024-12-09 17:37:50.319376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.801 [2024-12-09 17:37:50.319544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.801 [2024-12-09 17:37:50.319553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.801 [2024-12-09 17:37:50.319559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.801 [2024-12-09 17:37:50.319566] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.801 [2024-12-09 17:37:50.331792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.801 [2024-12-09 17:37:50.332194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.801 [2024-12-09 17:37:50.332212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:23.801 [2024-12-09 17:37:50.332220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:23.801 [2024-12-09 17:37:50.332388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:23.801 [2024-12-09 17:37:50.332556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.801 [2024-12-09 17:37:50.332565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.801 [2024-12-09 17:37:50.332572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.801 [2024-12-09 17:37:50.332578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:24.061 [2024-12-09 17:37:50.344793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:24.061 [2024-12-09 17:37:50.345200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.062 [2024-12-09 17:37:50.345218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:24.062 [2024-12-09 17:37:50.345225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:24.062 [2024-12-09 17:37:50.345408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:24.062 [2024-12-09 17:37:50.345577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:24.062 [2024-12-09 17:37:50.345586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:24.062 [2024-12-09 17:37:50.345596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:24.062 [2024-12-09 17:37:50.345614] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:24.062 [2024-12-09 17:37:50.357819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:24.062 [2024-12-09 17:37:50.358223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.062 [2024-12-09 17:37:50.358241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:24.062 [2024-12-09 17:37:50.358248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:24.062 [2024-12-09 17:37:50.358417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:24.062 [2024-12-09 17:37:50.358586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:24.062 [2024-12-09 17:37:50.358596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:24.062 [2024-12-09 17:37:50.358603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:24.062 [2024-12-09 17:37:50.358609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:24.062 [2024-12-09 17:37:50.370847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:24.062 [2024-12-09 17:37:50.371272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.062 [2024-12-09 17:37:50.371290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:24.062 [2024-12-09 17:37:50.371298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:24.062 [2024-12-09 17:37:50.371466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:24.062 [2024-12-09 17:37:50.371635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:24.062 [2024-12-09 17:37:50.371644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:24.062 [2024-12-09 17:37:50.371651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:24.062 [2024-12-09 17:37:50.371658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:24.062 [2024-12-09 17:37:50.383841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:24.062 [2024-12-09 17:37:50.384216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.062 [2024-12-09 17:37:50.384263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:24.062 [2024-12-09 17:37:50.384287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:24.062 [2024-12-09 17:37:50.384869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:24.062 [2024-12-09 17:37:50.385466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:24.062 [2024-12-09 17:37:50.385477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:24.062 [2024-12-09 17:37:50.385483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:24.062 [2024-12-09 17:37:50.385491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:24.062 [2024-12-09 17:37:50.396904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:24.062 [2024-12-09 17:37:50.397313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.062 [2024-12-09 17:37:50.397330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420
00:27:24.062 [2024-12-09 17:37:50.397338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set
00:27:24.062 [2024-12-09 17:37:50.397506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor
00:27:24.062 [2024-12-09 17:37:50.397674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:24.062 [2024-12-09 17:37:50.397683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:24.062 [2024-12-09 17:37:50.397690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:24.062 [2024-12-09 17:37:50.397697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:24.062 [2024-12-09 17:37:50.409786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.062 [2024-12-09 17:37:50.410199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.062 [2024-12-09 17:37:50.410245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.062 [2024-12-09 17:37:50.410268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.062 [2024-12-09 17:37:50.410722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.062 [2024-12-09 17:37:50.410892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.062 [2024-12-09 17:37:50.410902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.062 [2024-12-09 17:37:50.410908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.062 [2024-12-09 17:37:50.410915] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.062 [2024-12-09 17:37:50.422728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.062 [2024-12-09 17:37:50.423104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.062 [2024-12-09 17:37:50.423122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.062 [2024-12-09 17:37:50.423129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.062 [2024-12-09 17:37:50.423303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.062 [2024-12-09 17:37:50.423472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.062 [2024-12-09 17:37:50.423481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.062 [2024-12-09 17:37:50.423488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.062 [2024-12-09 17:37:50.423494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.062 [2024-12-09 17:37:50.435714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.062 [2024-12-09 17:37:50.436139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.062 [2024-12-09 17:37:50.436156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.062 [2024-12-09 17:37:50.436172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.062 [2024-12-09 17:37:50.436360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.062 [2024-12-09 17:37:50.436534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.062 [2024-12-09 17:37:50.436543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.062 [2024-12-09 17:37:50.436550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.062 [2024-12-09 17:37:50.436557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.062 [2024-12-09 17:37:50.448773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.062 [2024-12-09 17:37:50.449113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.062 [2024-12-09 17:37:50.449130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.062 [2024-12-09 17:37:50.449137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.062 [2024-12-09 17:37:50.449311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.062 [2024-12-09 17:37:50.449480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.062 [2024-12-09 17:37:50.449490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.062 [2024-12-09 17:37:50.449496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.062 [2024-12-09 17:37:50.449502] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.062 [2024-12-09 17:37:50.461761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.062 [2024-12-09 17:37:50.462192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.062 [2024-12-09 17:37:50.462237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.062 [2024-12-09 17:37:50.462260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.062 [2024-12-09 17:37:50.462760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.062 [2024-12-09 17:37:50.462929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.062 [2024-12-09 17:37:50.462938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.062 [2024-12-09 17:37:50.462945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.062 [2024-12-09 17:37:50.462951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.062 [2024-12-09 17:37:50.474702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.062 [2024-12-09 17:37:50.475031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.063 [2024-12-09 17:37:50.475048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.063 [2024-12-09 17:37:50.475055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.063 [2024-12-09 17:37:50.475228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.063 [2024-12-09 17:37:50.475398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.063 [2024-12-09 17:37:50.475411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.063 [2024-12-09 17:37:50.475418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.063 [2024-12-09 17:37:50.475424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.063 [2024-12-09 17:37:50.487587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.063 [2024-12-09 17:37:50.487989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.063 [2024-12-09 17:37:50.488006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.063 [2024-12-09 17:37:50.488013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.063 [2024-12-09 17:37:50.488186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.063 [2024-12-09 17:37:50.488355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.063 [2024-12-09 17:37:50.488365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.063 [2024-12-09 17:37:50.488372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.063 [2024-12-09 17:37:50.488378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.063 [2024-12-09 17:37:50.500482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.063 [2024-12-09 17:37:50.500867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.063 [2024-12-09 17:37:50.500919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.063 [2024-12-09 17:37:50.500943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.063 [2024-12-09 17:37:50.501483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.063 [2024-12-09 17:37:50.501654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.063 [2024-12-09 17:37:50.501663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.063 [2024-12-09 17:37:50.501669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.063 [2024-12-09 17:37:50.501676] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.063 [2024-12-09 17:37:50.513492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.063 [2024-12-09 17:37:50.513851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.063 [2024-12-09 17:37:50.513869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.063 [2024-12-09 17:37:50.513878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.063 [2024-12-09 17:37:50.514051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.063 [2024-12-09 17:37:50.514231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.063 [2024-12-09 17:37:50.514242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.063 [2024-12-09 17:37:50.514249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.063 [2024-12-09 17:37:50.514260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.063 [2024-12-09 17:37:50.526471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.063 [2024-12-09 17:37:50.526864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.063 [2024-12-09 17:37:50.526909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.063 [2024-12-09 17:37:50.526933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.063 [2024-12-09 17:37:50.527529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.063 [2024-12-09 17:37:50.527970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.063 [2024-12-09 17:37:50.527980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.063 [2024-12-09 17:37:50.527987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.063 [2024-12-09 17:37:50.527993] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.063 [2024-12-09 17:37:50.539550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.063 [2024-12-09 17:37:50.539919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.063 [2024-12-09 17:37:50.539964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.063 [2024-12-09 17:37:50.539987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.063 [2024-12-09 17:37:50.540587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.063 [2024-12-09 17:37:50.541089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.063 [2024-12-09 17:37:50.541099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.063 [2024-12-09 17:37:50.541105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.063 [2024-12-09 17:37:50.541112] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.063 [2024-12-09 17:37:50.552603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.063 [2024-12-09 17:37:50.553034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.063 [2024-12-09 17:37:50.553077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.063 [2024-12-09 17:37:50.553100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.063 [2024-12-09 17:37:50.553699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.063 [2024-12-09 17:37:50.553888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.063 [2024-12-09 17:37:50.553897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.063 [2024-12-09 17:37:50.553904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.063 [2024-12-09 17:37:50.553911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.063 [2024-12-09 17:37:50.565576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.063 [2024-12-09 17:37:50.566001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.063 [2024-12-09 17:37:50.566017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.063 [2024-12-09 17:37:50.566025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.063 [2024-12-09 17:37:50.566602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.063 [2024-12-09 17:37:50.566777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.063 [2024-12-09 17:37:50.566786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.063 [2024-12-09 17:37:50.566793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.063 [2024-12-09 17:37:50.566800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.063 [2024-12-09 17:37:50.578675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.063 [2024-12-09 17:37:50.579109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.063 [2024-12-09 17:37:50.579126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.063 [2024-12-09 17:37:50.579134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.063 [2024-12-09 17:37:50.579312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.063 [2024-12-09 17:37:50.579486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.063 [2024-12-09 17:37:50.579495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.063 [2024-12-09 17:37:50.579502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.063 [2024-12-09 17:37:50.579509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.063 [2024-12-09 17:37:50.591715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.063 [2024-12-09 17:37:50.592149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.063 [2024-12-09 17:37:50.592207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.063 [2024-12-09 17:37:50.592231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.063 [2024-12-09 17:37:50.592802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.063 [2024-12-09 17:37:50.592977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.063 [2024-12-09 17:37:50.592987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.063 [2024-12-09 17:37:50.592995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.063 [2024-12-09 17:37:50.593002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.324 [2024-12-09 17:37:50.604675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.324 [2024-12-09 17:37:50.605082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.324 [2024-12-09 17:37:50.605099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.324 [2024-12-09 17:37:50.605107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.324 [2024-12-09 17:37:50.605290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.324 [2024-12-09 17:37:50.605463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.324 [2024-12-09 17:37:50.605473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.324 [2024-12-09 17:37:50.605480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.324 [2024-12-09 17:37:50.605487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.324 [2024-12-09 17:37:50.617608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.324 [2024-12-09 17:37:50.617963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.324 [2024-12-09 17:37:50.617980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.324 [2024-12-09 17:37:50.617987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.324 [2024-12-09 17:37:50.618154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.324 [2024-12-09 17:37:50.618338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.324 [2024-12-09 17:37:50.618348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.324 [2024-12-09 17:37:50.618355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.324 [2024-12-09 17:37:50.618361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.324 [2024-12-09 17:37:50.630628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.324 [2024-12-09 17:37:50.630976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.324 [2024-12-09 17:37:50.630993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.324 [2024-12-09 17:37:50.631001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.324 [2024-12-09 17:37:50.631175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.324 [2024-12-09 17:37:50.631345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.324 [2024-12-09 17:37:50.631355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.324 [2024-12-09 17:37:50.631361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.324 [2024-12-09 17:37:50.631368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.325 [2024-12-09 17:37:50.643529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.325 [2024-12-09 17:37:50.643951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.325 [2024-12-09 17:37:50.643968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.325 [2024-12-09 17:37:50.643976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.325 [2024-12-09 17:37:50.644145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.325 [2024-12-09 17:37:50.644338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.325 [2024-12-09 17:37:50.644353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.325 [2024-12-09 17:37:50.644360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.325 [2024-12-09 17:37:50.644366] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.325 [2024-12-09 17:37:50.656473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.325 [2024-12-09 17:37:50.656826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.325 [2024-12-09 17:37:50.656844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.325 [2024-12-09 17:37:50.656851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.325 [2024-12-09 17:37:50.657019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.325 [2024-12-09 17:37:50.657193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.325 [2024-12-09 17:37:50.657203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.325 [2024-12-09 17:37:50.657210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.325 [2024-12-09 17:37:50.657217] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.325 [2024-12-09 17:37:50.669503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.325 [2024-12-09 17:37:50.669817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.325 [2024-12-09 17:37:50.669835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.325 [2024-12-09 17:37:50.669843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.325 [2024-12-09 17:37:50.670010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.325 [2024-12-09 17:37:50.670187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.325 [2024-12-09 17:37:50.670197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.325 [2024-12-09 17:37:50.670204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.325 [2024-12-09 17:37:50.670211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2054765 Killed "${NVMF_APP[@]}" "$@" 00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.325 [2024-12-09 17:37:50.682472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.325 [2024-12-09 17:37:50.682879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.325 [2024-12-09 17:37:50.682895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.325 [2024-12-09 17:37:50.682903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.325 [2024-12-09 17:37:50.683075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.325 [2024-12-09 17:37:50.683249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.325 [2024-12-09 17:37:50.683260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.325 [2024-12-09 17:37:50.683267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:27:24.325 [2024-12-09 17:37:50.683273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2056039 00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2056039 00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2056039 ']' 00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:24.325 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.325 [2024-12-09 17:37:50.695698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.325 [2024-12-09 17:37:50.696053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.325 [2024-12-09 17:37:50.696071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.325 [2024-12-09 17:37:50.696079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.325 [2024-12-09 17:37:50.696258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.325 [2024-12-09 17:37:50.696432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.325 [2024-12-09 17:37:50.696441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.325 [2024-12-09 17:37:50.696447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.325 [2024-12-09 17:37:50.696454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.325 [2024-12-09 17:37:50.708819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.325 [2024-12-09 17:37:50.709259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.325 [2024-12-09 17:37:50.709278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.325 [2024-12-09 17:37:50.709286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.325 [2024-12-09 17:37:50.709460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.325 [2024-12-09 17:37:50.709635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.325 [2024-12-09 17:37:50.709644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.325 [2024-12-09 17:37:50.709656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.325 [2024-12-09 17:37:50.709663] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.325 [2024-12-09 17:37:50.721885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.325 [2024-12-09 17:37:50.722158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.325 [2024-12-09 17:37:50.722181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.325 [2024-12-09 17:37:50.722189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.325 [2024-12-09 17:37:50.722361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.325 [2024-12-09 17:37:50.722535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.325 [2024-12-09 17:37:50.722545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.325 [2024-12-09 17:37:50.722551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.325 [2024-12-09 17:37:50.722558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.325 [2024-12-09 17:37:50.734935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.325 [2024-12-09 17:37:50.735384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.325 [2024-12-09 17:37:50.735402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.325 [2024-12-09 17:37:50.735410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.325 [2024-12-09 17:37:50.735583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.325 [2024-12-09 17:37:50.735706] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:27:24.325 [2024-12-09 17:37:50.735747] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.325 [2024-12-09 17:37:50.735756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.325 [2024-12-09 17:37:50.735765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.325 [2024-12-09 17:37:50.735772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.325 [2024-12-09 17:37:50.735778] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.325 [2024-12-09 17:37:50.747940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.325 [2024-12-09 17:37:50.748358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.325 [2024-12-09 17:37:50.748376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.326 [2024-12-09 17:37:50.748385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.326 [2024-12-09 17:37:50.748558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.326 [2024-12-09 17:37:50.748733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.326 [2024-12-09 17:37:50.748743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.326 [2024-12-09 17:37:50.748753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.326 [2024-12-09 17:37:50.748760] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.326 [2024-12-09 17:37:50.760990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.326 [2024-12-09 17:37:50.761362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.326 [2024-12-09 17:37:50.761380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.326 [2024-12-09 17:37:50.761389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.326 [2024-12-09 17:37:50.761562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.326 [2024-12-09 17:37:50.761736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.326 [2024-12-09 17:37:50.761745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.326 [2024-12-09 17:37:50.761755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.326 [2024-12-09 17:37:50.761762] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.326 [2024-12-09 17:37:50.773968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.326 [2024-12-09 17:37:50.774333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.326 [2024-12-09 17:37:50.774351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.326 [2024-12-09 17:37:50.774359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.326 [2024-12-09 17:37:50.774532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.326 [2024-12-09 17:37:50.774705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.326 [2024-12-09 17:37:50.774715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.326 [2024-12-09 17:37:50.774722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.326 [2024-12-09 17:37:50.774729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.326 [2024-12-09 17:37:50.786944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.326 [2024-12-09 17:37:50.787376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.326 [2024-12-09 17:37:50.787395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.326 [2024-12-09 17:37:50.787403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.326 [2024-12-09 17:37:50.787576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.326 [2024-12-09 17:37:50.787750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.326 [2024-12-09 17:37:50.787760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.326 [2024-12-09 17:37:50.787767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.326 [2024-12-09 17:37:50.787774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.326 [2024-12-09 17:37:50.799903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.326 [2024-12-09 17:37:50.800260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.326 [2024-12-09 17:37:50.800278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.326 [2024-12-09 17:37:50.800286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.326 [2024-12-09 17:37:50.800454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.326 [2024-12-09 17:37:50.800623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.326 [2024-12-09 17:37:50.800632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.326 [2024-12-09 17:37:50.800639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.326 [2024-12-09 17:37:50.800645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.326 [2024-12-09 17:37:50.812816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.326 [2024-12-09 17:37:50.813238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.326 [2024-12-09 17:37:50.813255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.326 [2024-12-09 17:37:50.813263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.326 [2024-12-09 17:37:50.813431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.326 [2024-12-09 17:37:50.813599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.326 [2024-12-09 17:37:50.813609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.326 [2024-12-09 17:37:50.813615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.326 [2024-12-09 17:37:50.813622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.326 [2024-12-09 17:37:50.815875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:24.326 [2024-12-09 17:37:50.825807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.326 [2024-12-09 17:37:50.826263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.326 [2024-12-09 17:37:50.826285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.326 [2024-12-09 17:37:50.826294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.326 [2024-12-09 17:37:50.826465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.326 [2024-12-09 17:37:50.826636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.326 [2024-12-09 17:37:50.826646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.326 [2024-12-09 17:37:50.826654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.326 [2024-12-09 17:37:50.826662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.326 [2024-12-09 17:37:50.838733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.326 [2024-12-09 17:37:50.839156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.326 [2024-12-09 17:37:50.839183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.326 [2024-12-09 17:37:50.839191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.326 [2024-12-09 17:37:50.839360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.326 [2024-12-09 17:37:50.839530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.326 [2024-12-09 17:37:50.839539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.326 [2024-12-09 17:37:50.839546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.326 [2024-12-09 17:37:50.839552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.326 [2024-12-09 17:37:50.851624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.326 [2024-12-09 17:37:50.852028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.326 [2024-12-09 17:37:50.852045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.326 [2024-12-09 17:37:50.852052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.326 [2024-12-09 17:37:50.852227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.326 [2024-12-09 17:37:50.852397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.326 [2024-12-09 17:37:50.852406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.326 [2024-12-09 17:37:50.852413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.326 [2024-12-09 17:37:50.852420] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:24.326 [2024-12-09 17:37:50.855921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.326 [2024-12-09 17:37:50.855948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.326 [2024-12-09 17:37:50.855955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.326 [2024-12-09 17:37:50.855960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:24.326 [2024-12-09 17:37:50.855965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:24.326 [2024-12-09 17:37:50.857216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.326 [2024-12-09 17:37:50.857327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.326 [2024-12-09 17:37:50.857328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:24.587 [2024-12-09 17:37:50.864670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.587 [2024-12-09 17:37:50.865120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-12-09 17:37:50.865140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.587 [2024-12-09 17:37:50.865149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.587 [2024-12-09 17:37:50.865331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.587 [2024-12-09 17:37:50.865509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.587 [2024-12-09 17:37:50.865519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.587 [2024-12-09 17:37:50.865533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.587 [2024-12-09 17:37:50.865541] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.587 [2024-12-09 17:37:50.877754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.587 [2024-12-09 17:37:50.878208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-12-09 17:37:50.878230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.587 [2024-12-09 17:37:50.878239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.587 [2024-12-09 17:37:50.878415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.587 [2024-12-09 17:37:50.878592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.587 [2024-12-09 17:37:50.878602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.587 [2024-12-09 17:37:50.878611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.587 [2024-12-09 17:37:50.878619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.587 [2024-12-09 17:37:50.890843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.587 [2024-12-09 17:37:50.891273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-12-09 17:37:50.891295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.587 [2024-12-09 17:37:50.891305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.587 [2024-12-09 17:37:50.891482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.587 [2024-12-09 17:37:50.891660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.587 [2024-12-09 17:37:50.891670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.587 [2024-12-09 17:37:50.891678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.587 [2024-12-09 17:37:50.891686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.587 [2024-12-09 17:37:50.903901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.587 [2024-12-09 17:37:50.904345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-12-09 17:37:50.904366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.587 [2024-12-09 17:37:50.904375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.587 [2024-12-09 17:37:50.904552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.587 [2024-12-09 17:37:50.904729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.587 [2024-12-09 17:37:50.904738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.587 [2024-12-09 17:37:50.904746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.587 [2024-12-09 17:37:50.904754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.587 [2024-12-09 17:37:50.916983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.587 [2024-12-09 17:37:50.917294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-12-09 17:37:50.917316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.587 [2024-12-09 17:37:50.917325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.587 [2024-12-09 17:37:50.917501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.588 [2024-12-09 17:37:50.917678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.588 [2024-12-09 17:37:50.917690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.588 [2024-12-09 17:37:50.917699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.588 [2024-12-09 17:37:50.917707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.588 [2024-12-09 17:37:50.930083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.588 [2024-12-09 17:37:50.930414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-12-09 17:37:50.930433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.588 [2024-12-09 17:37:50.930441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.588 [2024-12-09 17:37:50.930615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.588 [2024-12-09 17:37:50.930791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.588 [2024-12-09 17:37:50.930801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.588 [2024-12-09 17:37:50.930810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.588 [2024-12-09 17:37:50.930817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.588 [2024-12-09 17:37:50.943203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.588 [2024-12-09 17:37:50.943582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-12-09 17:37:50.943600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.588 [2024-12-09 17:37:50.943608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.588 [2024-12-09 17:37:50.943781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.588 [2024-12-09 17:37:50.943956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.588 [2024-12-09 17:37:50.943966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.588 [2024-12-09 17:37:50.943973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.588 [2024-12-09 17:37:50.943981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.588 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.588 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:24.588 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:24.588 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:24.588 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.588 [2024-12-09 17:37:50.956233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.588 [2024-12-09 17:37:50.956626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-12-09 17:37:50.956645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.588 [2024-12-09 17:37:50.956654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.588 [2024-12-09 17:37:50.956828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.588 [2024-12-09 17:37:50.957003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.588 [2024-12-09 17:37:50.957013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.588 [2024-12-09 17:37:50.957019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.588 [2024-12-09 17:37:50.957027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.588 [2024-12-09 17:37:50.969240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.588 [2024-12-09 17:37:50.969533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-12-09 17:37:50.969551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.588 [2024-12-09 17:37:50.969559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.588 [2024-12-09 17:37:50.969732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.588 [2024-12-09 17:37:50.969908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.588 [2024-12-09 17:37:50.969917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.588 [2024-12-09 17:37:50.969924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.588 [2024-12-09 17:37:50.969931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.588 [2024-12-09 17:37:50.982309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.588 [2024-12-09 17:37:50.982669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-12-09 17:37:50.982690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.588 [2024-12-09 17:37:50.982698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.588 [2024-12-09 17:37:50.982871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.588 [2024-12-09 17:37:50.983047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.588 [2024-12-09 17:37:50.983057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.588 [2024-12-09 17:37:50.983064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.588 [2024-12-09 17:37:50.983071] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.588 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.588 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:24.588 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.588 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.588 [2024-12-09 17:37:50.993001] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.588 [2024-12-09 17:37:50.995303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.588 [2024-12-09 17:37:50.995589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-12-09 17:37:50.995608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.588 [2024-12-09 17:37:50.995616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.588 [2024-12-09 17:37:50.995789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.588 [2024-12-09 17:37:50.995965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.588 [2024-12-09 17:37:50.995975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.588 [2024-12-09 17:37:50.995981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.588 [2024-12-09 17:37:50.995988] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.588 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.588 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:24.588 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.588 17:37:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.588 5082.00 IOPS, 19.85 MiB/s [2024-12-09T16:37:51.128Z] [2024-12-09 17:37:51.008314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.588 [2024-12-09 17:37:51.008654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-12-09 17:37:51.008672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.588 [2024-12-09 17:37:51.008680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.588 [2024-12-09 17:37:51.008854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.588 [2024-12-09 17:37:51.009030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.588 [2024-12-09 17:37:51.009039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.588 [2024-12-09 17:37:51.009047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.588 [2024-12-09 17:37:51.009054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.588 [2024-12-09 17:37:51.021281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.588 [2024-12-09 17:37:51.021565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-12-09 17:37:51.021583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.588 [2024-12-09 17:37:51.021592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.588 [2024-12-09 17:37:51.021765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.588 [2024-12-09 17:37:51.021942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.588 [2024-12-09 17:37:51.021956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.588 [2024-12-09 17:37:51.021963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.588 [2024-12-09 17:37:51.021970] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.588 Malloc0 00:27:24.588 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.588 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:24.588 [2024-12-09 17:37:51.034358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.588 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.588 [2024-12-09 17:37:51.034694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.589 [2024-12-09 17:37:51.034713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.589 [2024-12-09 17:37:51.034721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.589 [2024-12-09 17:37:51.034894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.589 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.589 [2024-12-09 17:37:51.035069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.589 [2024-12-09 17:37:51.035079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.589 [2024-12-09 17:37:51.035086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.589 [2024-12-09 17:37:51.035093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.589 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.589 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:24.589 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.589 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.589 [2024-12-09 17:37:51.047476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.589 [2024-12-09 17:37:51.047763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.589 [2024-12-09 17:37:51.047780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36760 with addr=10.0.0.2, port=4420 00:27:24.589 [2024-12-09 17:37:51.047788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36760 is same with the state(6) to be set 00:27:24.589 [2024-12-09 17:37:51.047961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36760 (9): Bad file descriptor 00:27:24.589 [2024-12-09 17:37:51.048134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.589 [2024-12-09 17:37:51.048144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.589 [2024-12-09 17:37:51.048151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.589 [2024-12-09 17:37:51.048157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.589 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.589 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.589 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.589 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.589 [2024-12-09 17:37:51.057062] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.589 [2024-12-09 17:37:51.060538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.589 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.589 17:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2055133 00:27:24.589 [2024-12-09 17:37:51.083788] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:27:26.903 5845.14 IOPS, 22.83 MiB/s [2024-12-09T16:37:54.011Z] 6553.62 IOPS, 25.60 MiB/s [2024-12-09T16:37:55.387Z] 7088.44 IOPS, 27.69 MiB/s [2024-12-09T16:37:56.323Z] 7504.40 IOPS, 29.31 MiB/s [2024-12-09T16:37:57.259Z] 7864.18 IOPS, 30.72 MiB/s [2024-12-09T16:37:58.197Z] 8174.50 IOPS, 31.93 MiB/s [2024-12-09T16:37:59.133Z] 8423.92 IOPS, 32.91 MiB/s [2024-12-09T16:38:00.070Z] 8632.00 IOPS, 33.72 MiB/s 00:27:33.530 Latency(us) 00:27:33.530 [2024-12-09T16:38:00.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.530 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:33.530 Verification LBA range: start 0x0 length 0x4000 00:27:33.530 Nvme1n1 : 15.01 8820.87 34.46 10983.22 0.00 6443.50 442.76 16976.94 00:27:33.530 [2024-12-09T16:38:00.070Z] =================================================================================================================== 00:27:33.530 [2024-12-09T16:38:00.070Z] Total : 8820.87 34.46 10983.22 0.00 6443.50 442.76 16976.94 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:33.789 17:38:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:33.789 rmmod nvme_tcp 00:27:33.789 rmmod nvme_fabrics 00:27:33.789 rmmod nvme_keyring 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2056039 ']' 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2056039 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2056039 ']' 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2056039 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2056039 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2056039' 00:27:33.789 killing process with pid 2056039 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 2056039 00:27:33.789 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2056039 00:27:34.049 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:34.049 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:34.049 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:34.049 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:34.049 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:34.049 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:34.049 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:34.049 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:34.049 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:34.049 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.049 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.049 17:38:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:36.583 00:27:36.583 real 0m25.984s 00:27:36.583 user 1m0.440s 00:27:36.583 sys 0m6.861s 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.583 ************************************ 00:27:36.583 END TEST nvmf_bdevperf 00:27:36.583 ************************************ 00:27:36.583 17:38:02 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.583 ************************************ 00:27:36.583 START TEST nvmf_target_disconnect 00:27:36.583 ************************************ 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:36.583 * Looking for test storage... 00:27:36.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:36.583 17:38:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:36.583 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:36.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.584 --rc genhtml_branch_coverage=1 00:27:36.584 --rc genhtml_function_coverage=1 00:27:36.584 --rc genhtml_legend=1 00:27:36.584 --rc geninfo_all_blocks=1 00:27:36.584 --rc geninfo_unexecuted_blocks=1 
00:27:36.584 00:27:36.584 ' 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:36.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.584 --rc genhtml_branch_coverage=1 00:27:36.584 --rc genhtml_function_coverage=1 00:27:36.584 --rc genhtml_legend=1 00:27:36.584 --rc geninfo_all_blocks=1 00:27:36.584 --rc geninfo_unexecuted_blocks=1 00:27:36.584 00:27:36.584 ' 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:36.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.584 --rc genhtml_branch_coverage=1 00:27:36.584 --rc genhtml_function_coverage=1 00:27:36.584 --rc genhtml_legend=1 00:27:36.584 --rc geninfo_all_blocks=1 00:27:36.584 --rc geninfo_unexecuted_blocks=1 00:27:36.584 00:27:36.584 ' 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:36.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.584 --rc genhtml_branch_coverage=1 00:27:36.584 --rc genhtml_function_coverage=1 00:27:36.584 --rc genhtml_legend=1 00:27:36.584 --rc geninfo_all_blocks=1 00:27:36.584 --rc geninfo_unexecuted_blocks=1 00:27:36.584 00:27:36.584 ' 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.584 17:38:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:36.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:36.584 17:38:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:41.944 
17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:41.944 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:41.944 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:41.944 Found net devices under 0000:af:00.0: cvl_0_0 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:41.944 Found net devices under 0000:af:00.1: cvl_0_1 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:41.944 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
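The discovery loop above globs `/sys/bus/pci/devices/$pci/net/*` (common.sh@411) and then keeps only the interface basenames with a `##*/` expansion (common.sh@427), yielding `cvl_0_0` and `cvl_0_1`. A self-contained sketch of that idiom, with hard-coded example paths standing in for the real sysfs glob:

```shell
#!/bin/bash
# Mimics nvmf/common.sh@411/@427: collect per-PCI net entries, then strip
# the directory prefix so only interface names remain. The paths below are
# illustrative; the real script discovers them from sysfs.
pci_net_devs=(
    "/sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0"
    "/sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1"
)
pci_net_devs=("${pci_net_devs[@]##*/}")  # drop the longest */ prefix from each element
echo "${pci_net_devs[@]}"                # prints: cvl_0_0 cvl_0_1
```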
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.945 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:42.204 17:38:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:42.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:27:42.204 00:27:42.204 --- 10.0.0.2 ping statistics --- 00:27:42.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.204 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:42.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:42.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:27:42.204 00:27:42.204 --- 10.0.0.1 ping statistics --- 00:27:42.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.204 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:42.204 17:38:08 
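Once the cross-namespace pings succeed, common.sh@293 splices the `ip netns exec cvl_0_0_ns_spdk` wrapper in front of the target's command array, so every later `"${NVMF_APP[@]}"` expansion launches the target inside the namespace. A sketch of that array-prefix trick, with `echo` standing in for the real wrapper and binary so it runs anywhere:

```shell
#!/bin/bash
# Array splicing as in nvmf/common.sh@293. In the real run the prefix is
# (ip netns exec cvl_0_0_ns_spdk) and the suffix is the nvmf_tgt command
# line; echo is a stand-in here.
NVMF_TARGET_NS_CMD=(echo "ip-netns-exec-stand-in:")
NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # prepend the wrapper
"${NVMF_APP[@]}"  # prints: ip-netns-exec-stand-in: nvmf_tgt -i 0 -e 0xFFFF
```

Keeping the command as an array (rather than a flat string) is what lets the wrapper be prepended without re-quoting every argument.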
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:42.204 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:42.464 ************************************ 00:27:42.464 START TEST nvmf_target_disconnect_tc1 00:27:42.464 ************************************ 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:42.464 [2024-12-09 17:38:08.881729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.464 [2024-12-09 17:38:08.881786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad50b0 with 
addr=10.0.0.2, port=4420 00:27:42.464 [2024-12-09 17:38:08.881823] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:42.464 [2024-12-09 17:38:08.881834] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:42.464 [2024-12-09 17:38:08.881841] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:42.464 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:42.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:42.464 Initializing NVMe Controllers 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:42.464 00:27:42.464 real 0m0.123s 00:27:42.464 user 0m0.047s 00:27:42.464 sys 0m0.076s 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:42.464 ************************************ 00:27:42.464 END TEST nvmf_target_disconnect_tc1 00:27:42.464 ************************************ 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:42.464 17:38:08 
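tc1 drives the reconnect example through the `NOT` wrapper: the command is expected to fail (nothing is listening yet, hence the `connect() failed, errno = 111` above), its status lands in `es`, and the test passes only when `(( !es == 0 ))` holds. A simplified stand-in for that helper from autotest_common.sh (the real one also validates the executable path and handles signal exit codes, as the `es > 128` check in the trace shows):

```shell
#!/bin/sh
# Simplified model of the NOT/es expected-failure pattern visible at
# autotest_common.sh@652-@679. Not the real helper; a minimal sketch.
NOT() {
    es=0
    "$@" || es=$?    # run the wrapped command, capture its exit status
    [ "$es" -ne 0 ]  # succeed only if the command failed, as expected
}
NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success flagged"
```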
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:42.464 ************************************ 00:27:42.464 START TEST nvmf_target_disconnect_tc2 00:27:42.464 ************************************ 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2061097 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2061097 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2061097 ']' 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:42.464 17:38:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.723 [2024-12-09 17:38:09.027654] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:27:42.723 [2024-12-09 17:38:09.027699] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.723 [2024-12-09 17:38:09.105182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:42.723 [2024-12-09 17:38:09.147036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.723 [2024-12-09 17:38:09.147074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.723 [2024-12-09 17:38:09.147081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.723 [2024-12-09 17:38:09.147087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.723 [2024-12-09 17:38:09.147092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:42.723 [2024-12-09 17:38:09.148538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:42.723 [2024-12-09 17:38:09.148646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:42.723 [2024-12-09 17:38:09.148769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:42.723 [2024-12-09 17:38:09.148770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:42.723 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.723 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:42.723 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:42.723 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:42.723 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.981 Malloc0 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.981 17:38:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.981 [2024-12-09 17:38:09.326430] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.981 17:38:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.981 [2024-12-09 17:38:09.358682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:42.981 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.982 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.982 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.982 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2061193 00:27:42.982 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:42.982 17:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:44.887 17:38:11 
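tc2 starts the reconnect I/O generator in the background and then, at target_disconnect.sh@45 just below, SIGKILLs the target (pid 2061097); the flood of "Read/Write completed with error" lines that follows is the expected fallout of the dead connection. A sketch of that kill-and-observe step, with `sleep` standing in for nvmf_tgt:

```shell
#!/bin/sh
# Models the disconnect at target_disconnect.sh@45: the target process is
# killed with SIGKILL while a client still runs I/O against it. A child
# killed by signal 9 reports exit status 128+9=137 to wait.
sleep 100 &          # stand-in for the nvmf_tgt target process
nvmfpid=$!
kill -9 "$nvmfpid"   # same signal the test sends to its real target pid
wait "$nvmfpid"
echo "target exit status: $?"  # prints: target exit status: 137
```

The `sleep 2` after the kill in the trace gives the in-flight queue pairs time to drain their failures before the test moves on.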
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2061097 00:27:44.887 17:38:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Write completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Write completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Write completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Write completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Write completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 
Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Write completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Write completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Write completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Write completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Write completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 [2024-12-09 17:38:11.386653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Write completed with error (sct=0, sc=8) 00:27:44.887 starting I/O failed 00:27:44.887 Read completed with error (sct=0, sc=8) 00:27:44.887 starting I/O 
failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 [2024-12-09 17:38:11.386852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Read completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.887 Write completed with error (sct=0, sc=8)
00:27:44.887 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Write completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 [2024-12-09 17:38:11.387042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Write completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Write completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Write completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Write completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Write completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Write completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Write completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Write completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 Read completed with error (sct=0, sc=8)
00:27:44.888 starting I/O failed
00:27:44.888 [2024-12-09 17:38:11.387243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.888 [2024-12-09 17:38:11.387428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.387454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.387615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.387625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.387793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.387803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.387884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.387893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.388021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.388031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.388178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.388189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.388348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.388358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.388520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.388529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.388651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.388691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.388824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.388855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.389048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.389079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.389263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.389273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.389336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.389345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.389419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.389428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.389643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.389653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.389804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.389814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.390083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.390114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.390241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.390273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.390408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.390439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.390664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.390695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.390971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.391002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.391125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.391155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.391354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.391365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.391438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.391447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.391569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.391578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.391639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.391648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.888 [2024-12-09 17:38:11.391777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.888 [2024-12-09 17:38:11.391787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.888 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.391864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.391873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.392094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.392124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.392342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.392374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.392542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.392573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.392691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.392701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.392938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.392947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.393008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.393017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.393096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.393105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.393195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.393205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.393351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.393361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.393442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.393451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.393610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.393620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.393746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.393756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.393883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.393893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.393950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.393959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.394015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.394024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.394146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.394155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.394313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.394333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.394418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.394439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.394624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.394646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.394729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.394739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.394810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.394823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.394899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.394909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.394979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.394988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.395133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.395142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.395312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.395323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.395413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.395423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.395500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.395509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.395655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.395665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.395795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.395805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.395869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.395878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.395934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.395943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.396003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.396012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.396072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.396081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.396136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.396145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.396322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.396332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.396402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.396411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.396475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.396484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.889 [2024-12-09 17:38:11.396620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.889 [2024-12-09 17:38:11.396629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.889 qpair failed and we were unable to recover it.
00:27:44.890 [2024-12-09 17:38:11.396689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.890 [2024-12-09 17:38:11.396698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.890 qpair failed and we were unable to recover it.
00:27:44.890 [2024-12-09 17:38:11.396892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.890 [2024-12-09 17:38:11.396901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.890 qpair failed and we were unable to recover it.
00:27:44.890 [2024-12-09 17:38:11.396972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.890 [2024-12-09 17:38:11.396981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.890 qpair failed and we were unable to recover it.
00:27:44.890 [2024-12-09 17:38:11.397047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.890 [2024-12-09 17:38:11.397056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.890 qpair failed and we were unable to recover it.
00:27:44.890 [2024-12-09 17:38:11.397185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.890 [2024-12-09 17:38:11.397195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.890 qpair failed and we were unable to recover it.
00:27:44.890 [2024-12-09 17:38:11.397268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.890 [2024-12-09 17:38:11.397277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.890 qpair failed and we were unable to recover it.
00:27:44.890 [2024-12-09 17:38:11.397348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.890 [2024-12-09 17:38:11.397357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.890 qpair failed and we were unable to recover it.
00:27:44.890 [2024-12-09 17:38:11.397418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.890 [2024-12-09 17:38:11.397440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.890 qpair failed and we were unable to recover it.
00:27:44.890 [2024-12-09 17:38:11.397536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.890 [2024-12-09 17:38:11.397548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.890 qpair failed and we were unable to recover it.
00:27:44.890 [2024-12-09 17:38:11.397690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.397702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.397781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.397793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.397859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.397872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.398020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.398033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.398120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.398132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 
00:27:44.890 [2024-12-09 17:38:11.398205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.398218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.398308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.398319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.398467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.398480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.398550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.398562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.398630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.398642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 
00:27:44.890 [2024-12-09 17:38:11.398701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.398713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.398777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.398789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.398853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.398864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.398996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.399011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.399207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.399222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 
00:27:44.890 [2024-12-09 17:38:11.399283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.399295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.399371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.399383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.399447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.399459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.399536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.399549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.399635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.399648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 
00:27:44.890 [2024-12-09 17:38:11.399721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.399733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.399814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.399826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.399955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.399967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.400037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.400049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.400200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.400214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 
00:27:44.890 [2024-12-09 17:38:11.400356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.400369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.400441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.400453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.400620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.400633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.400702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.400714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 00:27:44.890 [2024-12-09 17:38:11.400851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.890 [2024-12-09 17:38:11.400864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.890 qpair failed and we were unable to recover it. 
00:27:44.891 [2024-12-09 17:38:11.400933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.400946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.401007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.401019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.401095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.401107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.401272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.401289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.401351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.401363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 
00:27:44.891 [2024-12-09 17:38:11.401505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.401519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.401594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.401606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.401749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.401762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.401897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.401910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.402070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.402082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 
00:27:44.891 [2024-12-09 17:38:11.402155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.402176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.402313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.402328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.402398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.402411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.402613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.402627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.402697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.402710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 
00:27:44.891 [2024-12-09 17:38:11.402839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.402853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.403052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.403065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.403132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.403145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.403289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.403303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.403386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.403398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 
00:27:44.891 [2024-12-09 17:38:11.403527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.403540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.403681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.403694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.403764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.403777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.403844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.403860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.403953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.403966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 
00:27:44.891 [2024-12-09 17:38:11.404053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.404066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.404129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.404141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.404350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.404365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.404459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.404471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.404641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.404654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 
00:27:44.891 [2024-12-09 17:38:11.404732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.404744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.404875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.404888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.891 qpair failed and we were unable to recover it. 00:27:44.891 [2024-12-09 17:38:11.404970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.891 [2024-12-09 17:38:11.404982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.405116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.405129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.405197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.405210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 
00:27:44.892 [2024-12-09 17:38:11.405352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.405366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.405444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.405457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.405527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.405539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.405610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.405623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.405765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.405778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 
00:27:44.892 [2024-12-09 17:38:11.405920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.405933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.406097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.406111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.406203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.406216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.406363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.406376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.406452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.406464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 
00:27:44.892 [2024-12-09 17:38:11.406549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.406562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.406640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.406667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.406756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.406774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.406858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.406875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.406959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.406977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 
00:27:44.892 [2024-12-09 17:38:11.407125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.407145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.407403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.407442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.407676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.407713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.407906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.407937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 00:27:44.892 [2024-12-09 17:38:11.408176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.892 [2024-12-09 17:38:11.408208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:44.892 qpair failed and we were unable to recover it. 
00:27:44.892 [2024-12-09 17:38:11.408405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.408437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.408611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.408629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.408770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.408787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.408890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.408921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.409126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.409156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.409384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.409416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.409609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.409640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.409829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.409861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.409967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.410004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.410211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.410244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.410505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.410535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.410768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.410800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.410919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.410951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.411136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.411177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.411389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.411421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.411533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.411564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.892 qpair failed and we were unable to recover it.
00:27:44.892 [2024-12-09 17:38:11.411665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.892 [2024-12-09 17:38:11.411696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.411871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.411888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.411976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.411993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.412230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.412249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.412415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.412432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.412522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.412540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.412682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.412700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.412872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.412903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.413073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.413104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.413273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.413306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.413470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.413488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.413588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.413606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.413766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.413783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.413938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.413968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.414205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.414237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.414428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.414468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.414692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.414709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.414814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.414831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.414922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.414940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.415094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.415114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.415295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.415327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.415442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.415473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.415588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.415619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.415791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.415821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.415981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.415999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.416258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.416290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.416469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.416501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.416692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.416723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.416957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.416988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.417188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.417222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.417339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.417370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.417609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.417640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.417815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.417852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.418035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.418066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.418187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.418221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.418406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.418437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.418625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.418656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.418868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.418900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.419003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.419035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.893 qpair failed and we were unable to recover it.
00:27:44.893 [2024-12-09 17:38:11.419205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.893 [2024-12-09 17:38:11.419238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.419425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.419456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.419647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.419678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.419967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.419997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.420216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.420249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.420421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.420452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.420737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.420768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.420907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.420939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.421116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.421148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.421347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.421379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.421561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.421592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.421852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.421882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.422118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.422150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.422440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.422471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.422724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.422755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:44.894 [2024-12-09 17:38:11.422938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.894 [2024-12-09 17:38:11.422969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:44.894 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.423147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.423191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.423382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.423413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.423538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.423569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.423737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.423768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.423943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.423976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.424162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.424207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.424401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.424433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.424563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.424595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.424706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.424737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.424851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.424882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.424998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.425028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.425138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.425179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.425289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.425320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.425492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.425522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.425761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.425792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.425980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.426011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.426201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.426234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.426427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.426465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.426723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.426754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.426928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.426959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.427221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.427253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.427436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.427467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.427702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.427733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.427912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.427943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.428113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.428144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.428393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.171 [2024-12-09 17:38:11.428424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.171 qpair failed and we were unable to recover it.
00:27:45.171 [2024-12-09 17:38:11.428623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.428655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.428772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.428802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.429060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.429092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.429217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.429250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.429366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.429397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.429585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.429617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.429788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.429819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.429931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.429961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.430142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.430184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.430348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.430379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.430500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.430531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.430769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.430799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.431060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.431091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.431279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.431313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.431486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.431518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.431710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.431741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.431851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.431882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.432052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.172 [2024-12-09 17:38:11.432083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.172 qpair failed and we were unable to recover it.
00:27:45.172 [2024-12-09 17:38:11.432303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.432337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.432602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.432633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.432815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.432847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.432960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.432991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.433122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.433153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 
00:27:45.172 [2024-12-09 17:38:11.433426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.433459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.433697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.433728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.433858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.433888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.434133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.434164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.434291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.434323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 
00:27:45.172 [2024-12-09 17:38:11.434509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.434540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.434722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.434753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.435012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.435044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.435286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.435318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.435495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.435526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 
00:27:45.172 [2024-12-09 17:38:11.435645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.435677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.435914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.435944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.436068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.436099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.436339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.436371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.436562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.436594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 
00:27:45.172 [2024-12-09 17:38:11.436868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.172 [2024-12-09 17:38:11.436899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.172 qpair failed and we were unable to recover it. 00:27:45.172 [2024-12-09 17:38:11.437178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.437210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.437390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.437422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.437552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.437583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.437849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.437880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 
00:27:45.173 [2024-12-09 17:38:11.438048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.438080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.438200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.438233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.438363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.438394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.438620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.438649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.438819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.438850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 
00:27:45.173 [2024-12-09 17:38:11.438965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.438995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.439202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.439235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.439438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.439470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.439582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.439614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.439894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.439925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 
00:27:45.173 [2024-12-09 17:38:11.440204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.440236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.440357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.440388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.440665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.440696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.440930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.440962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.441218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.441253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 
00:27:45.173 [2024-12-09 17:38:11.441383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.441420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.441523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.441554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.441743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.441774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.441957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.441988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.442159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.442202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 
00:27:45.173 [2024-12-09 17:38:11.442393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.442424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.442536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.442567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.442800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.442831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.443036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.443067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.443237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.443270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 
00:27:45.173 [2024-12-09 17:38:11.443372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.443403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.443570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.443601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.443811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.443842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.443973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.444003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.444245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.444278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 
00:27:45.173 [2024-12-09 17:38:11.444398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.444429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.444620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.444650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.444831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.444861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.445046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.445076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 00:27:45.173 [2024-12-09 17:38:11.445246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.173 [2024-12-09 17:38:11.445278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.173 qpair failed and we were unable to recover it. 
00:27:45.173 [2024-12-09 17:38:11.445522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.445553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 00:27:45.174 [2024-12-09 17:38:11.445684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.445715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 00:27:45.174 [2024-12-09 17:38:11.445954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.445984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 00:27:45.174 [2024-12-09 17:38:11.446194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.446227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 00:27:45.174 [2024-12-09 17:38:11.446420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.446450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 
00:27:45.174 [2024-12-09 17:38:11.446569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.446600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 00:27:45.174 [2024-12-09 17:38:11.446773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.446803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 00:27:45.174 [2024-12-09 17:38:11.446994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.447026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 00:27:45.174 [2024-12-09 17:38:11.447151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.447193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 00:27:45.174 [2024-12-09 17:38:11.447373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.447403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 
00:27:45.174 [2024-12-09 17:38:11.447599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.447630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 00:27:45.174 [2024-12-09 17:38:11.447813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.447844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 00:27:45.174 [2024-12-09 17:38:11.447947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.447978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 00:27:45.174 [2024-12-09 17:38:11.448077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.448108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 00:27:45.174 [2024-12-09 17:38:11.448400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.448433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 
00:27:45.174 [2024-12-09 17:38:11.448550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.174 [2024-12-09 17:38:11.448580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.174 qpair failed and we were unable to recover it. 
[... the three messages above repeat approximately 114 more times with advancing timestamps (2024-12-09 17:38:11.448703 through 17:38:11.472600); every retry against tqpair=0x7f30ec000b90 (10.0.0.2, port 4420) fails identically with errno = 111 ...]
00:27:45.177 [2024-12-09 17:38:11.472718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.472749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.472933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.472963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.473143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.473207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.473383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.473415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.473583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.473613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 
00:27:45.177 [2024-12-09 17:38:11.473797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.473828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.474015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.474045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.474159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.474204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.474325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.474355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.474523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.474554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 
00:27:45.177 [2024-12-09 17:38:11.474744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.474781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.474969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.474999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.475202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.475235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.475480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.475511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.475687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.475718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 
00:27:45.177 [2024-12-09 17:38:11.475904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.475935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.476114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.476145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.476400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.476432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.476605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.476636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.476834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.476866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 
00:27:45.177 [2024-12-09 17:38:11.476994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.177 [2024-12-09 17:38:11.477024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.177 qpair failed and we were unable to recover it. 00:27:45.177 [2024-12-09 17:38:11.477163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.477211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.477399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.477431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.477618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.477649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.477834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.477866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 
00:27:45.178 [2024-12-09 17:38:11.478032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.478063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.478262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.478294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.478485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.478516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.478630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.478662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.478864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.478895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 
00:27:45.178 [2024-12-09 17:38:11.479188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.479221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.479404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.479436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.479622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.479653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.479773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.479803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.479925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.479956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 
00:27:45.178 [2024-12-09 17:38:11.480148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.480189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.480393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.480425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.480559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.480591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.480775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.480806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.480940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.480970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 
00:27:45.178 [2024-12-09 17:38:11.481204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.481237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.481474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.481505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.481621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.481651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.481853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.481883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.482147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.482189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 
00:27:45.178 [2024-12-09 17:38:11.482423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.482454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.482566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.482596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.482767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.482798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.482904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.482935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.483110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.483141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 
00:27:45.178 [2024-12-09 17:38:11.483439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.483477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.483667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.483698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.483901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.483932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.484118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.484150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.484299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.484330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 
00:27:45.178 [2024-12-09 17:38:11.484571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.484600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.484788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.484819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.485057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.485088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.485300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.485335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 00:27:45.178 [2024-12-09 17:38:11.485517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.178 [2024-12-09 17:38:11.485548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.178 qpair failed and we were unable to recover it. 
00:27:45.178 [2024-12-09 17:38:11.485786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.485817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 00:27:45.179 [2024-12-09 17:38:11.485999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.486031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 00:27:45.179 [2024-12-09 17:38:11.486234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.486267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 00:27:45.179 [2024-12-09 17:38:11.486535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.486566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 00:27:45.179 [2024-12-09 17:38:11.486756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.486789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 
00:27:45.179 [2024-12-09 17:38:11.487025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.487056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 00:27:45.179 [2024-12-09 17:38:11.487183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.487213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 00:27:45.179 [2024-12-09 17:38:11.487388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.487418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 00:27:45.179 [2024-12-09 17:38:11.487624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.487654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 00:27:45.179 [2024-12-09 17:38:11.487842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.487873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 
00:27:45.179 [2024-12-09 17:38:11.488056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.488087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 00:27:45.179 [2024-12-09 17:38:11.488193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.488225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 00:27:45.179 [2024-12-09 17:38:11.488421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.488452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 00:27:45.179 [2024-12-09 17:38:11.488637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.488669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 00:27:45.179 [2024-12-09 17:38:11.488866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.488896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 
00:27:45.179 [2024-12-09 17:38:11.489018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.179 [2024-12-09 17:38:11.489049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.179 qpair failed and we were unable to recover it. 
00:27:45.182 [2024-12-09 17:38:11.514433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.514464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.514743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.514774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.515008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.515040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.515237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.515270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.515459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.515490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 
00:27:45.182 [2024-12-09 17:38:11.515661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.515692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.515859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.515889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.516065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.516096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.516353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.516384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.516504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.516536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 
00:27:45.182 [2024-12-09 17:38:11.516716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.516747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.516925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.516956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.517083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.517113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.517297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.517330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.517509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.517541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 
00:27:45.182 [2024-12-09 17:38:11.517656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.517687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.517856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.517888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.518055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.518086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.518301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.518335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.518520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.518551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 
00:27:45.182 [2024-12-09 17:38:11.518850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.518887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.519005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.182 [2024-12-09 17:38:11.519035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.182 qpair failed and we were unable to recover it. 00:27:45.182 [2024-12-09 17:38:11.519150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.519192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.519450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.519482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.519750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.519781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 
00:27:45.183 [2024-12-09 17:38:11.520015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.520045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.520219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.520252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.520366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.520396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.520564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.520594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.520697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.520728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 
00:27:45.183 [2024-12-09 17:38:11.520917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.520948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.521117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.521147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.521287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.521318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.521528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.521560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.521747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.521779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 
00:27:45.183 [2024-12-09 17:38:11.521902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.521932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.522112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.522144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.522276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.522308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.522643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.522673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.522848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.522879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 
00:27:45.183 [2024-12-09 17:38:11.523058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.523088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.523362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.523395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.523679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.523710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.523844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.523876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.524113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.524144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 
00:27:45.183 [2024-12-09 17:38:11.524347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.524378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.524512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.524543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.524723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.524753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.524885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.524916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.525116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.525148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 
00:27:45.183 [2024-12-09 17:38:11.525415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.525448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.525617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.525647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.525830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.525860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.526030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.526060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.526309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.526341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 
00:27:45.183 [2024-12-09 17:38:11.526457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.526487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.526612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.526643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.526811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.526841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.527024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.527055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.527254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.527287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 
00:27:45.183 [2024-12-09 17:38:11.527455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.527492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.183 qpair failed and we were unable to recover it. 00:27:45.183 [2024-12-09 17:38:11.527726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.183 [2024-12-09 17:38:11.527757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.527940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.527971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.528210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.528243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.528509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.528540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 
00:27:45.184 [2024-12-09 17:38:11.528745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.528776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.528962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.528993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.529236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.529268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.529452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.529484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.529600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.529631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 
00:27:45.184 [2024-12-09 17:38:11.529875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.529906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.530081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.530113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.530295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.530326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.530441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.530472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.530664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.530696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 
00:27:45.184 [2024-12-09 17:38:11.530820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.530850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.531038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.531069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.531282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.531314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.531492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.531523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.531692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.531723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 
00:27:45.184 [2024-12-09 17:38:11.531922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.531953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.532119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.532149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.532357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.532387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.532497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.532528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.532709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.532740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 
00:27:45.184 [2024-12-09 17:38:11.533004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.533036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.533225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.533257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.533388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.533420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.533669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.533700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.533878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.533908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 
00:27:45.184 [2024-12-09 17:38:11.534085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.534116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.534385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.534417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.534534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.534565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.534691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.534721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 00:27:45.184 [2024-12-09 17:38:11.534840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.534871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.184 qpair failed and we were unable to recover it. 
00:27:45.184 [2024-12-09 17:38:11.534997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.184 [2024-12-09 17:38:11.535027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.535290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.535323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.535513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.535544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.535646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.535676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.535856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.535887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 
00:27:45.185 [2024-12-09 17:38:11.536156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.536218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.536412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.536443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.536625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.536656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.536915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.536947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.537212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.537245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 
00:27:45.185 [2024-12-09 17:38:11.537366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.537396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.537579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.537609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.537792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.537823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.537996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.538027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.538287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.538319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 
00:27:45.185 [2024-12-09 17:38:11.538563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.538593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.538811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.538842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.538957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.538988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.539228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.539262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.539393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.539424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 
00:27:45.185 [2024-12-09 17:38:11.539618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.539648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.539856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.539886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.540018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.540050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.540241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.540274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.540538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.540569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 
00:27:45.185 [2024-12-09 17:38:11.540669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.540700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.540888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.540917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.541094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.541126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.541251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.541283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.541473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.541504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 
00:27:45.185 [2024-12-09 17:38:11.541687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.541716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.541890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.541920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.542165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.542206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.542326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.542358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.542476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.542506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 
00:27:45.185 [2024-12-09 17:38:11.542747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.542777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.542966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.542996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.543106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.543136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.543328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.185 [2024-12-09 17:38:11.543360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.185 qpair failed and we were unable to recover it. 00:27:45.185 [2024-12-09 17:38:11.543532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.543562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 
00:27:45.186 [2024-12-09 17:38:11.543674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.543704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.543965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.543997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.544254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.544287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.544543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.544574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.544752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.544784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 
00:27:45.186 [2024-12-09 17:38:11.545044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.545075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.545254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.545287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.545524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.545555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.545725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.545757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.545955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.545985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 
00:27:45.186 [2024-12-09 17:38:11.546154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.546197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.546313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.546344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.546463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.546493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.546673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.546703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.546886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.546917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 
00:27:45.186 [2024-12-09 17:38:11.547102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.547133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.547325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.547358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.547613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.547644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.547906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.547937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.548139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.548181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 
00:27:45.186 [2024-12-09 17:38:11.548348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.548380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.548495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.548526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.548736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.548768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.548948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.548979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.549112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.549144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 
00:27:45.186 [2024-12-09 17:38:11.549300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.549332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.549543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.549573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.549690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.549721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.549961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.549992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.550195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.550228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 
00:27:45.186 [2024-12-09 17:38:11.550405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.550436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.550636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.550667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.550856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.550892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.551153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.551193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.551442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.551473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 
00:27:45.186 [2024-12-09 17:38:11.551672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.551702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.551953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.551984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.186 [2024-12-09 17:38:11.552222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.186 [2024-12-09 17:38:11.552255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.186 qpair failed and we were unable to recover it. 00:27:45.187 [2024-12-09 17:38:11.552429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.187 [2024-12-09 17:38:11.552459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.187 qpair failed and we were unable to recover it. 00:27:45.187 [2024-12-09 17:38:11.552645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.187 [2024-12-09 17:38:11.552676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.187 qpair failed and we were unable to recover it. 
00:27:45.187 [2024-12-09 17:38:11.552870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.187 [2024-12-09 17:38:11.552902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.187 qpair failed and we were unable to recover it. 00:27:45.187 [2024-12-09 17:38:11.553141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.187 [2024-12-09 17:38:11.553184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.187 qpair failed and we were unable to recover it. 00:27:45.187 [2024-12-09 17:38:11.553407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.187 [2024-12-09 17:38:11.553438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.187 qpair failed and we were unable to recover it. 00:27:45.187 [2024-12-09 17:38:11.553620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.187 [2024-12-09 17:38:11.553652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.187 qpair failed and we were unable to recover it. 00:27:45.187 [2024-12-09 17:38:11.553891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.187 [2024-12-09 17:38:11.553921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.187 qpair failed and we were unable to recover it. 
00:27:45.187 [2024-12-09 17:38:11.554034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.554065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.554189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.554222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.554479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.554510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.554795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.554826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.555089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.555120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.555363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.555395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.555514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.555546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.555730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.555761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.555932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.555963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.556143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.556180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.556348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.556379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.556615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.556645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.556753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.556785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.556976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.557005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.557255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.557289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.557470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.557500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.557621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.557651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.557858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.557889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.558059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.558090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.558258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.558291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.558462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.558492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.558625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.558657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.558772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.558802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.559012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.559042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.559300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.559332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.559449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.559479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.559649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.559681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.559795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.559831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.560010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.560041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.560235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.560269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.560395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.560427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.560556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.187 [2024-12-09 17:38:11.560587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.187 qpair failed and we were unable to recover it.
00:27:45.187 [2024-12-09 17:38:11.560779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.560810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.561052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.561085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.561268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.561301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.561484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.561515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.561644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.561676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.561847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.561877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.562059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.562090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.562291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.562325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.562569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.562600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.562847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.562878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.563086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.563118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.563305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.563336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.563573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.563605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.563774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.563805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.564054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.564085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.564255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.564288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.564473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.564502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.564690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.564721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.564908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.564938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.565121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.565152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.565445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.565477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.565737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.565769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.566012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.566043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.566240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.566273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.566477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.566509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.566686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.566717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.566852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.566881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.567072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.567103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.567339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.567371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.567606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.567637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.567817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.567847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.568093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.568124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.568307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.568339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.568597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.568628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.568885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.568915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.569104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.569142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.569338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.569369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.569608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.569639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.188 [2024-12-09 17:38:11.569820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.188 [2024-12-09 17:38:11.569852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.188 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.570065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.570096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.570276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.570309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.570492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.570522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.570701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.570731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.570866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.570896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.571137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.571177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.571356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.571387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.571645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.571676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.571887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.571918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.572117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.572148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.572350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.572383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.572505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.572535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.572702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.572732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.572993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.573025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.573152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.573212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.573385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.573416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.573534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.573564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.573768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.573799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.573979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.574010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.574123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.574153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.574363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.574394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.574570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.574600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.574721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.574751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.574866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.189 [2024-12-09 17:38:11.574898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.189 qpair failed and we were unable to recover it.
00:27:45.189 [2024-12-09 17:38:11.575079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.189 [2024-12-09 17:38:11.575111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.189 qpair failed and we were unable to recover it. 00:27:45.189 [2024-12-09 17:38:11.575290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.189 [2024-12-09 17:38:11.575322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.189 qpair failed and we were unable to recover it. 00:27:45.189 [2024-12-09 17:38:11.575437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.189 [2024-12-09 17:38:11.575467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.189 qpair failed and we were unable to recover it. 00:27:45.189 [2024-12-09 17:38:11.575658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.189 [2024-12-09 17:38:11.575690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.189 qpair failed and we were unable to recover it. 00:27:45.189 [2024-12-09 17:38:11.575805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.189 [2024-12-09 17:38:11.575835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.189 qpair failed and we were unable to recover it. 
00:27:45.189 [2024-12-09 17:38:11.576016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.189 [2024-12-09 17:38:11.576046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.189 qpair failed and we were unable to recover it. 00:27:45.189 [2024-12-09 17:38:11.576188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.189 [2024-12-09 17:38:11.576221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.189 qpair failed and we were unable to recover it. 00:27:45.189 [2024-12-09 17:38:11.576456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.189 [2024-12-09 17:38:11.576487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.189 qpair failed and we were unable to recover it. 00:27:45.189 [2024-12-09 17:38:11.576687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.189 [2024-12-09 17:38:11.576719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.189 qpair failed and we were unable to recover it. 00:27:45.189 [2024-12-09 17:38:11.576839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.189 [2024-12-09 17:38:11.576869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.189 qpair failed and we were unable to recover it. 
00:27:45.189 [2024-12-09 17:38:11.577069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.577099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.577274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.577307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.577478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.577515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.577805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.577837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.577974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.578006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 
00:27:45.190 [2024-12-09 17:38:11.578198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.578231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.578332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.578362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.578486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.578518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.578758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.578788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.579002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.579034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 
00:27:45.190 [2024-12-09 17:38:11.579322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.579356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.579547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.579579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.579842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.579873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.580068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.580098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.580287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.580320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 
00:27:45.190 [2024-12-09 17:38:11.580570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.580601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.580743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.580774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.580955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.580984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.581084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.581114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.581333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.581366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 
00:27:45.190 [2024-12-09 17:38:11.581578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.581610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.581776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.581806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.581923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.581954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.582204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.582237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.582420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.582450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 
00:27:45.190 [2024-12-09 17:38:11.582574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.582605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.582798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.582828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.583013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.583044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.583152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.583192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.583368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.583400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 
00:27:45.190 [2024-12-09 17:38:11.583513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.583543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.583729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.583760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.583937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.583966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.584221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.584254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.584382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.584413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 
00:27:45.190 [2024-12-09 17:38:11.584584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.584615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.584789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.584819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.585001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.585033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.585202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.190 [2024-12-09 17:38:11.585235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.190 qpair failed and we were unable to recover it. 00:27:45.190 [2024-12-09 17:38:11.585484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.585515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 
00:27:45.191 [2024-12-09 17:38:11.585684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.585715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.585946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.585977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.586234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.586273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.586479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.586510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.586641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.586671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 
00:27:45.191 [2024-12-09 17:38:11.586793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.586824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.587008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.587039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.587278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.587311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.587427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.587457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.587650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.587680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 
00:27:45.191 [2024-12-09 17:38:11.587926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.587958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.588132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.588163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.588369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.588400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.588576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.588607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.588805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.588836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 
00:27:45.191 [2024-12-09 17:38:11.589083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.589115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.589414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.589447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.589630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.589660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.589848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.589878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.590075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.590105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 
00:27:45.191 [2024-12-09 17:38:11.590370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.590402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.590653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.590685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.590884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.590914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.591045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.591075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.591250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.591283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 
00:27:45.191 [2024-12-09 17:38:11.591392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.591423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.591621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.591652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.591863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.591893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.592066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.592098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.592291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.592323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 
00:27:45.191 [2024-12-09 17:38:11.592560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.592591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.592758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.592788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.593001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.593030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.593204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.593237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 00:27:45.191 [2024-12-09 17:38:11.593425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.191 [2024-12-09 17:38:11.593455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.191 qpair failed and we were unable to recover it. 
00:27:45.191 [2024-12-09 17:38:11.593644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.191 [2024-12-09 17:38:11.593675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.191 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111, ECONNREFUSED) and nvme_tcp_qpair_connect_sock error for tqpair=0x7f30ec000b90 (addr=10.0.0.2, port=4420) repeated continuously through 2024-12-09 17:38:11.617442; every retry ended with "qpair failed and we were unable to recover it." ...]
00:27:45.194 [2024-12-09 17:38:11.617628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.194 [2024-12-09 17:38:11.617658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.194 qpair failed and we were unable to recover it. 00:27:45.194 [2024-12-09 17:38:11.617836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.617866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.618131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.618161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.618358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.618390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.618559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.618589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 
00:27:45.195 [2024-12-09 17:38:11.618759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.618791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.618907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.618937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.619116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.619149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.619325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.619357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.619564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.619595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 
00:27:45.195 [2024-12-09 17:38:11.619829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.619860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.620103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.620135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.620332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.620370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.620493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.620524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.620691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.620721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 
00:27:45.195 [2024-12-09 17:38:11.620923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.620955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.621134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.621176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.621367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.621399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.621651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.621682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.621958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.621989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 
00:27:45.195 [2024-12-09 17:38:11.622113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.622144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.622327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.622359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.622580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.622610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.622775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.622806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.622919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.622950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 
00:27:45.195 [2024-12-09 17:38:11.623059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.623090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.623289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.623321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.623491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.623522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.623646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.623676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.623793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.623823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 
00:27:45.195 [2024-12-09 17:38:11.623992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.624022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.624126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.624157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.624294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.624325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.624511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.624541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.624727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.624759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 
00:27:45.195 [2024-12-09 17:38:11.625018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.625048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.625215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.625248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.625425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.625456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.625561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.625591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 00:27:45.195 [2024-12-09 17:38:11.625780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.195 [2024-12-09 17:38:11.625812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.195 qpair failed and we were unable to recover it. 
00:27:45.195 [2024-12-09 17:38:11.625938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.625969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.626065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.626095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.626209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.626242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.626493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.626524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.626697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.626728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 
00:27:45.196 [2024-12-09 17:38:11.626921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.626951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.627121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.627151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.627280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.627312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.627502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.627533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.627654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.627684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 
00:27:45.196 [2024-12-09 17:38:11.627799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.627830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.628068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.628099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.628270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.628309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.628424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.628455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.628715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.628746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 
00:27:45.196 [2024-12-09 17:38:11.628981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.629011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.629203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.629236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.629359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.629390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.629596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.629626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.629818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.629847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 
00:27:45.196 [2024-12-09 17:38:11.629980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.630010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.630190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.630224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.630431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.630466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.630648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.630679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.630858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.630889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 
00:27:45.196 [2024-12-09 17:38:11.631077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.631107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.631358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.631390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.631513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.631543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.631746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.631776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.632037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.632066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 
00:27:45.196 [2024-12-09 17:38:11.632260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.632293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.632477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.632508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.632610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.632640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.632828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.632858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.633029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.633061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 
00:27:45.196 [2024-12-09 17:38:11.633237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.633270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.633398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.633428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.633618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.633649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.633855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.633886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.196 qpair failed and we were unable to recover it. 00:27:45.196 [2024-12-09 17:38:11.634008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.196 [2024-12-09 17:38:11.634040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.197 qpair failed and we were unable to recover it. 
00:27:45.199 [2024-12-09 17:38:11.656028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.199 [2024-12-09 17:38:11.656057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.199 qpair failed and we were unable to recover it. 00:27:45.199 [2024-12-09 17:38:11.656244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.199 [2024-12-09 17:38:11.656277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.199 qpair failed and we were unable to recover it. 00:27:45.199 [2024-12-09 17:38:11.656450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.199 [2024-12-09 17:38:11.656481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.199 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.656659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.656690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.656808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.656838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 
00:27:45.200 [2024-12-09 17:38:11.657019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.657051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.657221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.657254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.657439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.657471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.657576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.657606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.657799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.657836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 
00:27:45.200 [2024-12-09 17:38:11.658025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.658055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.658207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.658239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.658354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.658385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.658508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.658541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.658726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.658758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 
00:27:45.200 [2024-12-09 17:38:11.658938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.658969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.659089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.659119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.659307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.659340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.659472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.659503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.659671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.659701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 
00:27:45.200 [2024-12-09 17:38:11.659880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.659912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.660020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.660051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.660174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.660206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.660331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.660364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.660538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.660570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 
00:27:45.200 [2024-12-09 17:38:11.660743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.660773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.660903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.660934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.661121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.661153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.661350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.661382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.661484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.661514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 
00:27:45.200 [2024-12-09 17:38:11.661692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.661724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.662026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.662058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.662212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.662246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.662375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.662407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.662537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.662567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 
00:27:45.200 [2024-12-09 17:38:11.662681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.662712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.662961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.662993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.663186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.663218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.663417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.663448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.663760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.663791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 
00:27:45.200 [2024-12-09 17:38:11.663975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.664006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.200 [2024-12-09 17:38:11.664196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.200 [2024-12-09 17:38:11.664230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.200 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.664431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.664463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.664647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.664678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.664792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.664822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 
00:27:45.201 [2024-12-09 17:38:11.664996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.665029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.665220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.665253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.665454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.665485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.665752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.665784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.665950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.665986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 
00:27:45.201 [2024-12-09 17:38:11.666180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.666213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.666399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.666429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.666600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.666632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.666932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.666963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.667226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.667259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 
00:27:45.201 [2024-12-09 17:38:11.667369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.667400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.667568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.667599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.667784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.667816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.667936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.667967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.668179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.668212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 
00:27:45.201 [2024-12-09 17:38:11.668331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.668362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.668606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.668638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.668754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.668785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.668926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.668958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.669138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.669178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 
00:27:45.201 [2024-12-09 17:38:11.669297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.669328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.669509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.669542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.669733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.669764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.669942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.669974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.670087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.670118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 
00:27:45.201 [2024-12-09 17:38:11.670255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.670286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.670390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.670421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.670612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.670642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.670764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.670794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.671057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.671089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 
00:27:45.201 [2024-12-09 17:38:11.671277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.671313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.671515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.671546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.671780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.671811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.671926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.671958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 00:27:45.201 [2024-12-09 17:38:11.672135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.201 [2024-12-09 17:38:11.672175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.201 qpair failed and we were unable to recover it. 
00:27:45.483 [2024-12-09 17:38:11.696284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.483 [2024-12-09 17:38:11.696317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.483 qpair failed and we were unable to recover it. 00:27:45.483 [2024-12-09 17:38:11.696437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.483 [2024-12-09 17:38:11.696468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.483 qpair failed and we were unable to recover it. 00:27:45.483 [2024-12-09 17:38:11.696639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.483 [2024-12-09 17:38:11.696670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.483 qpair failed and we were unable to recover it. 00:27:45.483 [2024-12-09 17:38:11.696843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.483 [2024-12-09 17:38:11.696874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.483 qpair failed and we were unable to recover it. 00:27:45.483 [2024-12-09 17:38:11.697076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.697108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-12-09 17:38:11.697313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.697346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.697472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.697504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.697642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.697672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.697881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.697912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.698110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.698141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-12-09 17:38:11.698338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.698370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.698480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.698511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.698630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.698661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.698776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.698808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.699021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.699052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-12-09 17:38:11.699218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.699253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.699428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.699460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.699702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.699734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.699901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.699933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.700032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.700069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-12-09 17:38:11.700176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.700209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.700397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.700429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.700609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.700640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.700810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.700842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.701037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.701068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-12-09 17:38:11.701252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.701285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.701459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.701490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.701606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.701637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.701876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.701908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.702088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.702119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-12-09 17:38:11.702273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.702307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.702412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.702444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.702624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.702656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.702858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.702890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.703105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.703136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-12-09 17:38:11.703356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.703391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.703572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.703603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.703727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.703759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.703935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.703966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.704078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.704109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-12-09 17:38:11.704319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.704352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.704618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.704649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.704747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-12-09 17:38:11.704777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-12-09 17:38:11.704983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.705014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.705196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.705229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-12-09 17:38:11.705330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.705361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.705475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.705507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.705702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.705732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.705976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.706006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.706122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.706152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-12-09 17:38:11.706427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.706459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.706629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.706662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.706830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.706860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.706980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.707010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.707131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.707161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-12-09 17:38:11.707432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.707464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.707699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.707731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.707846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.707878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.708057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.708088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.708198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.708237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-12-09 17:38:11.708428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.708460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.708656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.708687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.708889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.708921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.709100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.709131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.709267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.709298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-12-09 17:38:11.709486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.709519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.709729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.709758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.709862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.709894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.710071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.710102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.710237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.710269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-12-09 17:38:11.710494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.710526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.710764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.710797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.710988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.711017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.711222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.711255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-12-09 17:38:11.711447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.711479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-12-09 17:38:11.711600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-12-09 17:38:11.711631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." messages for tqpair=0x7f30ec000b90, addr=10.0.0.2, port=4420 repeated continuously from 17:38:11.711756 through 17:38:11.736024; repeats elided]
00:27:45.488 [2024-12-09 17:38:11.736195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.488 [2024-12-09 17:38:11.736228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.488 qpair failed and we were unable to recover it. 00:27:45.488 [2024-12-09 17:38:11.736419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.488 [2024-12-09 17:38:11.736450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.488 qpair failed and we were unable to recover it. 00:27:45.488 [2024-12-09 17:38:11.736618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.488 [2024-12-09 17:38:11.736649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.736824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.736855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.737040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.737071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 
00:27:45.489 [2024-12-09 17:38:11.737299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.737333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.737521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.737551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.737725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.737756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.737925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.737955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.738220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.738253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 
00:27:45.489 [2024-12-09 17:38:11.738446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.738477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.738657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.738688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.738952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.738983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.739112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.739143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.739269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.739299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 
00:27:45.489 [2024-12-09 17:38:11.739478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.739509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.739792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.739823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.740018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.740048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.740250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.740281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.740416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.740446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 
00:27:45.489 [2024-12-09 17:38:11.740712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.740743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.740981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.741012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.741257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.741290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.741412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.741443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.741622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.741654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 
00:27:45.489 [2024-12-09 17:38:11.741886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.741917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.742101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.742131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.742332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.742364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.742548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.742578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.742816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.742853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 
00:27:45.489 [2024-12-09 17:38:11.743024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.743056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.743228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.743261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.743506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.743537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.743766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.743797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.744006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.744037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 
00:27:45.489 [2024-12-09 17:38:11.744232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.744265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.744452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.744484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.744758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.744789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.744972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.745003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.745158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.745197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 
00:27:45.489 [2024-12-09 17:38:11.745324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.745355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.489 qpair failed and we were unable to recover it. 00:27:45.489 [2024-12-09 17:38:11.745546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.489 [2024-12-09 17:38:11.745578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.745715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.745746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.745863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.745894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.746156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.746216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 
00:27:45.490 [2024-12-09 17:38:11.746516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.746547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.746733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.746765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.746952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.746983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.747158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.747200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.747332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.747363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 
00:27:45.490 [2024-12-09 17:38:11.747474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.747504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.747703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.747735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.747849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.747879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.748068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.748098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.748360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.748393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 
00:27:45.490 [2024-12-09 17:38:11.748578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.748608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.748872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.748943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.749084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.749120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.749347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.749382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.749551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.749582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 
00:27:45.490 [2024-12-09 17:38:11.749817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.749848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.750087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.750118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.750263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.750297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.750433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.750465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.750728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.750759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 
00:27:45.490 [2024-12-09 17:38:11.750994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.751026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.751191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.751225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.751405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.751436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.751544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.751576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.751838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.751879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 
00:27:45.490 [2024-12-09 17:38:11.752152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.752192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.752368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.752400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.752574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.752606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.752863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.752895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.753060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.753092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 
00:27:45.490 [2024-12-09 17:38:11.753349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.753382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.753565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.753596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.753712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.753744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.753871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.753903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 00:27:45.490 [2024-12-09 17:38:11.754100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.490 [2024-12-09 17:38:11.754132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.490 qpair failed and we were unable to recover it. 
00:27:45.491 [2024-12-09 17:38:11.757381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.491 [2024-12-09 17:38:11.757412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.491 qpair failed and we were unable to recover it.
00:27:45.491 [2024-12-09 17:38:11.757642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.491 [2024-12-09 17:38:11.757712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.491 qpair failed and we were unable to recover it.
00:27:45.491 [2024-12-09 17:38:11.757926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.491 [2024-12-09 17:38:11.757961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.491 qpair failed and we were unable to recover it.
00:27:45.491 [2024-12-09 17:38:11.758085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.491 [2024-12-09 17:38:11.758117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.491 qpair failed and we were unable to recover it.
00:27:45.491 [2024-12-09 17:38:11.758373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.491 [2024-12-09 17:38:11.758407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.491 qpair failed and we were unable to recover it.
00:27:45.493 [2024-12-09 17:38:11.778480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-12-09 17:38:11.778512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-12-09 17:38:11.778634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-12-09 17:38:11.778665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-12-09 17:38:11.778860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-12-09 17:38:11.778891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-12-09 17:38:11.778995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.779027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.779238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.779271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 
00:27:45.494 [2024-12-09 17:38:11.779488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.779518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.779709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.779740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.780001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.780032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.780255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.780287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.780522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.780554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 
00:27:45.494 [2024-12-09 17:38:11.780654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.780685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.780952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.780984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.781174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.781217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.781346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.781378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.781495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.781526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 
00:27:45.494 [2024-12-09 17:38:11.781639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.781671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.781794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.781826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.782013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.782044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.782231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.782264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.782398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.782429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 
00:27:45.494 [2024-12-09 17:38:11.782630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.782661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.782831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.782862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.782981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.783013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.783215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.783247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.783500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.783531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 
00:27:45.494 [2024-12-09 17:38:11.783819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.783850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.783962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.783999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.784179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.784212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.784403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.784435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.784667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.784699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 
00:27:45.494 [2024-12-09 17:38:11.784885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.784916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.785162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.785203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.785412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.785444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.785565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.785595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.785864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.785895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 
00:27:45.494 [2024-12-09 17:38:11.786013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.786045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.786217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.786251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.786448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.786480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.786691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.786721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.786909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.786940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 
00:27:45.494 [2024-12-09 17:38:11.787070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.787101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.787279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.494 [2024-12-09 17:38:11.787312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.494 qpair failed and we were unable to recover it. 00:27:45.494 [2024-12-09 17:38:11.787411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.787442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.787642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.787673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.787863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.787895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 
00:27:45.495 [2024-12-09 17:38:11.788079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.788110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.788295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.788328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.788513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.788544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.788807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.788838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.788971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.789003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 
00:27:45.495 [2024-12-09 17:38:11.789121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.789153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.789342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.789373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.789562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.789593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.789794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.789825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.789996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.790027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 
00:27:45.495 [2024-12-09 17:38:11.790153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.790196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.790439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.790470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.790673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.790705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.790822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.790852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.790971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.791002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 
00:27:45.495 [2024-12-09 17:38:11.791240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.791273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.791511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.791543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.791669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.791700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.791940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.791971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.792082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.792115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 
00:27:45.495 [2024-12-09 17:38:11.792225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.792258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.792474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.792512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.792612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.792644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.792816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.792847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.793026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.793057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 
00:27:45.495 [2024-12-09 17:38:11.793227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.793260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.793518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.793549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.793726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.793758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.793950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.793980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.794159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.794212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 
00:27:45.495 [2024-12-09 17:38:11.794329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.794360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.794573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.794605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.794840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.794871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.795057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.795088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.795291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.795325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 
00:27:45.495 [2024-12-09 17:38:11.795522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.495 [2024-12-09 17:38:11.795554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.495 qpair failed and we were unable to recover it. 00:27:45.495 [2024-12-09 17:38:11.795704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.795735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.796000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.796032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.796197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.796228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.796484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.796515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 
00:27:45.496 [2024-12-09 17:38:11.796684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.796716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.796948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.796979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.797157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.797198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.797439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.797471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.797650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.797681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 
00:27:45.496 [2024-12-09 17:38:11.797859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.797890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.798014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.798046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.798312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.798346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.798469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.798501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.798681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.798712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 
00:27:45.496 [2024-12-09 17:38:11.798883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.798913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.799093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.799124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.799308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.799341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.799586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.799617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.799904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.799935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 
00:27:45.496 [2024-12-09 17:38:11.800048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.800078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.800207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.800240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.800357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.800388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.800570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.800601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.800782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.800813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 
00:27:45.496 [2024-12-09 17:38:11.801011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.801041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.801226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.801271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.801412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.801443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.801728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.801760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.802030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.802061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 
00:27:45.496 [2024-12-09 17:38:11.802243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.802275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.802464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.802496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.802669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.802701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.802887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.802918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.803122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.803153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 
00:27:45.496 [2024-12-09 17:38:11.803288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.496 [2024-12-09 17:38:11.803319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.496 qpair failed and we were unable to recover it. 00:27:45.496 [2024-12-09 17:38:11.803505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.803535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.803671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.803702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.803889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.803920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.804187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.804220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 
00:27:45.497 [2024-12-09 17:38:11.804337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.804369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.804546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.804577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.804700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.804732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.804854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.804886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.805071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.805103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 
00:27:45.497 [2024-12-09 17:38:11.805318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.805351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.805590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.805622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.805741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.805771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.805976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.806008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.806212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.806245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 
00:27:45.497 [2024-12-09 17:38:11.806426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.806458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.806650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.806681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.806799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.806831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.807072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.807104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.807279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.807312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 
00:27:45.497 [2024-12-09 17:38:11.807551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.807582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.807771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.807802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.808064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.808095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.808285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.808318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.808602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.808634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 
00:27:45.497 [2024-12-09 17:38:11.808822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.808853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.809059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.809090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.809229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.809260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.809545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.809576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.809784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.809816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 
00:27:45.497 [2024-12-09 17:38:11.810056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.810087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.810270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.810309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.810435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.810465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.810600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.810631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.810804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.810836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 
00:27:45.497 [2024-12-09 17:38:11.811020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.811052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.811152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.811204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.811326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.811356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.811537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.811568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 00:27:45.497 [2024-12-09 17:38:11.811805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.497 [2024-12-09 17:38:11.811836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.497 qpair failed and we were unable to recover it. 
00:27:45.497 [2024-12-09 17:38:11.812030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.812061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.812230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.812264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.812453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.812484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.812604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.812635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.812749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.812779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 
00:27:45.498 [2024-12-09 17:38:11.812923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.812955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.813078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.813110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.813237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.813269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.813458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.813490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.813661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.813692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 
00:27:45.498 [2024-12-09 17:38:11.813899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.813930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.814037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.814069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.814283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.814316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.814498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.814529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.814769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.814800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 
00:27:45.498 [2024-12-09 17:38:11.814985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.815016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.815195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.815228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.815398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.815429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.815628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.815660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.815829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.815861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 
00:27:45.498 [2024-12-09 17:38:11.816036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.816067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.816205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.816239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.816478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.816508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.816787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.816818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.817081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.817112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 
00:27:45.498 [2024-12-09 17:38:11.817367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.817400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.817588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.817620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.817823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.817853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.818108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.818139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.818404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.818437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 
00:27:45.498 [2024-12-09 17:38:11.818623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.818654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.818891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.818928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.819056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.819086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.819259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.819293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.819502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.819533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 
00:27:45.498 [2024-12-09 17:38:11.819702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.819733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.819856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.819887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.820010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.820041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.820179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.498 [2024-12-09 17:38:11.820212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.498 qpair failed and we were unable to recover it. 00:27:45.498 [2024-12-09 17:38:11.820459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.820491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 
00:27:45.499 [2024-12-09 17:38:11.820605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.820636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.820874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.820905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.821203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.821235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.821433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.821466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.821636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.821666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 
00:27:45.499 [2024-12-09 17:38:11.821862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.821894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.822077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.822109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.822369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.822402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.822579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.822610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.822789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.822820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 
00:27:45.499 [2024-12-09 17:38:11.823095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.823126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.823425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.823458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.823643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.823675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.823844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.823874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.824089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.824119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 
00:27:45.499 [2024-12-09 17:38:11.824311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.824344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.824481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.824513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.824726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.824757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.824891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.824923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.825138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.825178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 
00:27:45.499 [2024-12-09 17:38:11.825348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.825380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.825631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.825663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.825851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.825882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.826009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.826040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.826279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.826312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 
00:27:45.499 [2024-12-09 17:38:11.826432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.826464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.826652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.826683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.826867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.826898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.827089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.827120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.827370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.827402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 
00:27:45.499 [2024-12-09 17:38:11.827575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.827606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.827774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.827811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.827994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.828026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.828213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.828246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.828360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.828391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 
00:27:45.499 [2024-12-09 17:38:11.828630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.828662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.828859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.828891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.829064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.829095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.499 qpair failed and we were unable to recover it. 00:27:45.499 [2024-12-09 17:38:11.829291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.499 [2024-12-09 17:38:11.829325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.829496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.829526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 
00:27:45.500 [2024-12-09 17:38:11.829628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.829660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.829771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.829802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.829924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.829954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.830157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.830223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.830345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.830376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 
00:27:45.500 [2024-12-09 17:38:11.830503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.830535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.830738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.830769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.830989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.831020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.831151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.831194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.831435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.831466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 
00:27:45.500 [2024-12-09 17:38:11.831655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.831686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.831872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.831904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.832142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.832183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.832286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.832317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.832501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.832532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 
00:27:45.500 [2024-12-09 17:38:11.832787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.832819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.833030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.833061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.833245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.833279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.833466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.833498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.833684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.833714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 
00:27:45.500 [2024-12-09 17:38:11.833976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.834007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.834211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.834244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.834414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.834445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.834569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.834600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.834779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.834810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 
00:27:45.500 [2024-12-09 17:38:11.834923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.834955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.835067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.835097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.835217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.835251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.835360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.835392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 00:27:45.500 [2024-12-09 17:38:11.835630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.500 [2024-12-09 17:38:11.835661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.500 qpair failed and we were unable to recover it. 
00:27:45.500 [2024-12-09 17:38:11.835835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.500 [2024-12-09 17:38:11.835867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.500 qpair failed and we were unable to recover it.
00:27:45.503 [2024-12-09 17:38:11.859534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f340f0 is same with the state(6) to be set
00:27:45.503 [2024-12-09 17:38:11.859895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.503 [2024-12-09 17:38:11.859967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.503 qpair failed and we were unable to recover it.
00:27:45.503 [2024-12-09 17:38:11.861623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.503 [2024-12-09 17:38:11.861654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.503 qpair failed and we were unable to recover it. 00:27:45.503 [2024-12-09 17:38:11.861775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.503 [2024-12-09 17:38:11.861806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.503 qpair failed and we were unable to recover it. 00:27:45.503 [2024-12-09 17:38:11.862061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.503 [2024-12-09 17:38:11.862093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.503 qpair failed and we were unable to recover it. 00:27:45.503 [2024-12-09 17:38:11.862261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.503 [2024-12-09 17:38:11.862295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.503 qpair failed and we were unable to recover it. 00:27:45.503 [2024-12-09 17:38:11.862480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.503 [2024-12-09 17:38:11.862511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.503 qpair failed and we were unable to recover it. 
00:27:45.503 [2024-12-09 17:38:11.862622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.862653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.862797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.862830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.863001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.863032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.863225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.863258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.863523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.863555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 
00:27:45.504 [2024-12-09 17:38:11.863690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.863721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.863839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.863871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.864132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.864164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.864441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.864473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.864733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.864764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 
00:27:45.504 [2024-12-09 17:38:11.864959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.864991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.865253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.865286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.865495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.865526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.865701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.865732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.865916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.865954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 
00:27:45.504 [2024-12-09 17:38:11.866199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.866232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.866403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.866435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.866685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.866716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.866955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.866987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.867279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.867312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 
00:27:45.504 [2024-12-09 17:38:11.867494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.867526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.867718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.867750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.867946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.867977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.868239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.868272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.868521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.868554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 
00:27:45.504 [2024-12-09 17:38:11.868789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.868821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.869012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.869044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.869177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.869210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.869351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.869384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.869617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.869649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 
00:27:45.504 [2024-12-09 17:38:11.869782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.869815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.869927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.869959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.870203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.870236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.870500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.870532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.870721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.870752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 
00:27:45.504 [2024-12-09 17:38:11.870925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.870957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.871205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.871239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.871502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.871534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.504 [2024-12-09 17:38:11.871704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.504 [2024-12-09 17:38:11.871736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.504 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.871874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.871905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 
00:27:45.505 [2024-12-09 17:38:11.872098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.872130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.872285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.872320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.872451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.872483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.872667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.872700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.872937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.872969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 
00:27:45.505 [2024-12-09 17:38:11.873160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.873202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.873378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.873410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.873633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.873664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.873847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.873878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.874064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.874096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 
00:27:45.505 [2024-12-09 17:38:11.874302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.874335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.874453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.874484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.874614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.874644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.874775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.874806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.874995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.875032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 
00:27:45.505 [2024-12-09 17:38:11.875219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.875253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.875392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.875423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.875685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.875715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.875900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.875932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.876103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.876134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 
00:27:45.505 [2024-12-09 17:38:11.876252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.876285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.876477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.876509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.876764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.876796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.876915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.876947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.877126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.877158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 
00:27:45.505 [2024-12-09 17:38:11.877363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.877396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.877512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.877544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.877638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.877670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.877940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.877973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 00:27:45.505 [2024-12-09 17:38:11.878096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.505 [2024-12-09 17:38:11.878128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.505 qpair failed and we were unable to recover it. 
00:27:45.505 [2024-12-09 17:38:11.878328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.505 [2024-12-09 17:38:11.878361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.505 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every retry from 17:38:11.878475 through 17:38:11.902456 ...]
00:27:45.508 [2024-12-09 17:38:11.902810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.508 [2024-12-09 17:38:11.902842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.508 qpair failed and we were unable to recover it.
00:27:45.508 [2024-12-09 17:38:11.903017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.508 [2024-12-09 17:38:11.903049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.508 qpair failed and we were unable to recover it. 00:27:45.508 [2024-12-09 17:38:11.903253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.508 [2024-12-09 17:38:11.903287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.508 qpair failed and we were unable to recover it. 00:27:45.508 [2024-12-09 17:38:11.903411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.508 [2024-12-09 17:38:11.903443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.508 qpair failed and we were unable to recover it. 00:27:45.508 [2024-12-09 17:38:11.903660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.508 [2024-12-09 17:38:11.903693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.508 qpair failed and we were unable to recover it. 00:27:45.508 [2024-12-09 17:38:11.903863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.508 [2024-12-09 17:38:11.903896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.508 qpair failed and we were unable to recover it. 
00:27:45.508 [2024-12-09 17:38:11.904215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.508 [2024-12-09 17:38:11.904248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.508 qpair failed and we were unable to recover it. 00:27:45.508 [2024-12-09 17:38:11.904371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.904403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.904649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.904681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.904961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.904993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.905159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.905202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 
00:27:45.509 [2024-12-09 17:38:11.905389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.905420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.905545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.905577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.905749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.905781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.906042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.906072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.906241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.906281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 
00:27:45.509 [2024-12-09 17:38:11.906400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.906432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.906613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.906684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.906844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.906916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.907121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.907157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.907460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.907494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 
00:27:45.509 [2024-12-09 17:38:11.907670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.907700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.907808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.907841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.908029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.908061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.908199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.908232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.908352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.908384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 
00:27:45.509 [2024-12-09 17:38:11.908587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.908618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.908795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.908826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.909048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.909080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.909320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.909353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.909539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.909580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 
00:27:45.509 [2024-12-09 17:38:11.909770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.909802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.910015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.910047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.910157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.910203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.910440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.910472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.910644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.910676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 
00:27:45.509 [2024-12-09 17:38:11.910873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.910905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.911091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.911123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.911425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.911458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.911572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.911604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.911863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.911895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 
00:27:45.509 [2024-12-09 17:38:11.912023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.912054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.912249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.912283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.912474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.912506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.912796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.912829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 00:27:45.509 [2024-12-09 17:38:11.913022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.509 [2024-12-09 17:38:11.913054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.509 qpair failed and we were unable to recover it. 
00:27:45.509 [2024-12-09 17:38:11.913186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.913220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.913337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.913368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.913480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.913512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.913702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.913735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.913912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.913943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 
00:27:45.510 [2024-12-09 17:38:11.914059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.914092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.914342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.914375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.914498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.914530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.914712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.914745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.915010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.915043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 
00:27:45.510 [2024-12-09 17:38:11.915234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.915267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.915488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.915558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.915701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.915737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.915944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.915976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.916149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.916193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 
00:27:45.510 [2024-12-09 17:38:11.916436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.916469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.916585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.916616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.916855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.916886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.917090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.917121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.917313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.917347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 
00:27:45.510 [2024-12-09 17:38:11.917463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.917495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.917697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.917728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.917919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.917951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.918077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.918108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.918302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.918345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 
00:27:45.510 [2024-12-09 17:38:11.918540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.918571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.918776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.918810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.919014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.919044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.919228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.919263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.919437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.919469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 
00:27:45.510 [2024-12-09 17:38:11.919642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.919673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.919854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.919886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.920071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.920102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.920367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.920399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 00:27:45.510 [2024-12-09 17:38:11.920652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.510 [2024-12-09 17:38:11.920685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.510 qpair failed and we were unable to recover it. 
00:27:45.513 [2024-12-09 17:38:11.944588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.513 [2024-12-09 17:38:11.944620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.513 qpair failed and we were unable to recover it. 00:27:45.513 [2024-12-09 17:38:11.944818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.513 [2024-12-09 17:38:11.944851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.513 qpair failed and we were unable to recover it. 00:27:45.513 [2024-12-09 17:38:11.945036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.513 [2024-12-09 17:38:11.945067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.513 qpair failed and we were unable to recover it. 00:27:45.513 [2024-12-09 17:38:11.945236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.513 [2024-12-09 17:38:11.945270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.513 qpair failed and we were unable to recover it. 00:27:45.513 [2024-12-09 17:38:11.945406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.513 [2024-12-09 17:38:11.945438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.513 qpair failed and we were unable to recover it. 
00:27:45.513 [2024-12-09 17:38:11.945625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.513 [2024-12-09 17:38:11.945657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.513 qpair failed and we were unable to recover it. 00:27:45.513 [2024-12-09 17:38:11.945824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.513 [2024-12-09 17:38:11.945856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.513 qpair failed and we were unable to recover it. 00:27:45.513 [2024-12-09 17:38:11.946078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.946110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.946292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.946326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.946502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.946534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 
00:27:45.514 [2024-12-09 17:38:11.946704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.946736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.946906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.946938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.947185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.947218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.947410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.947442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.947572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.947605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 
00:27:45.514 [2024-12-09 17:38:11.947786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.947818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.948002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.948034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.948149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.948188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.948427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.948458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.948626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.948656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 
00:27:45.514 [2024-12-09 17:38:11.948859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.948890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.949060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.949091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.949242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.949275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.949515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.949547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.949783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.949814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 
00:27:45.514 [2024-12-09 17:38:11.950078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.950109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.950306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.950345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.950581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.950612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.950812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.950843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.951033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.951064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 
00:27:45.514 [2024-12-09 17:38:11.951245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.951278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.951460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.951491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.951665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.951696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.951958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.951988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.952195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.952226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 
00:27:45.514 [2024-12-09 17:38:11.952358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.952390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.952566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.952597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.952771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.952801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.952977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.953008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.953223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.953256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 
00:27:45.514 [2024-12-09 17:38:11.953373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.953403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.514 qpair failed and we were unable to recover it. 00:27:45.514 [2024-12-09 17:38:11.953596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.514 [2024-12-09 17:38:11.953626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.953761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.953791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.954028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.954059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.954183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.954216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 
00:27:45.515 [2024-12-09 17:38:11.954382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.954413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.954511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.954540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.954738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.954769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.954882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.954913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.955088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.955119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 
00:27:45.515 [2024-12-09 17:38:11.955309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.955342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.955472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.955502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.955607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.955638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.955791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.955822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.956062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.956093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 
00:27:45.515 [2024-12-09 17:38:11.956267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.956300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.956471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.956501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.956686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.956717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.956891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.956921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.957157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.957196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 
00:27:45.515 [2024-12-09 17:38:11.957391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.957421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.957542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.957572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.957784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.957814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.957944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.957974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.958174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.958206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 
00:27:45.515 [2024-12-09 17:38:11.958324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.958355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.958477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.958513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.958776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.958807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.958930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.958961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.959148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.959187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 
00:27:45.515 [2024-12-09 17:38:11.959306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.959337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.959510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.959540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.959785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.959815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.960051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.960082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 00:27:45.515 [2024-12-09 17:38:11.960218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.515 [2024-12-09 17:38:11.960254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.515 qpair failed and we were unable to recover it. 
00:27:45.515 [2024-12-09 17:38:11.960374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.515 [2024-12-09 17:38:11.960404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.515 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111; sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 17:38:11.960 through 17:38:11.985 ...]
00:27:45.518 [2024-12-09 17:38:11.985705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-12-09 17:38:11.985736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.518 qpair failed and we were unable to recover it. 00:27:45.518 [2024-12-09 17:38:11.985851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-12-09 17:38:11.985881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.518 qpair failed and we were unable to recover it. 00:27:45.518 [2024-12-09 17:38:11.986052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-12-09 17:38:11.986082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.518 qpair failed and we were unable to recover it. 00:27:45.518 [2024-12-09 17:38:11.986249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-12-09 17:38:11.986281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.518 qpair failed and we were unable to recover it. 00:27:45.518 [2024-12-09 17:38:11.986471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-12-09 17:38:11.986503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.518 qpair failed and we were unable to recover it. 
00:27:45.518 [2024-12-09 17:38:11.986690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-12-09 17:38:11.986721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.986920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.986950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.987133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.987164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.987287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.987318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.987492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.987523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 
00:27:45.519 [2024-12-09 17:38:11.987789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.987820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.987934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.987964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.988135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.988177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.988365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.988396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.988630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.988660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 
00:27:45.519 [2024-12-09 17:38:11.988898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.988928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.989056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.989087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.989270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.989303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.989548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.989578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.989752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.989783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 
00:27:45.519 [2024-12-09 17:38:11.989952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.989983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.990155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.990195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.990451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.990481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.990598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.990629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.990891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.990922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 
00:27:45.519 [2024-12-09 17:38:11.991041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.991071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.991366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.991400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.991523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.991554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.991734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.991764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.991883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.991914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 
00:27:45.519 [2024-12-09 17:38:11.992081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.992112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.992354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.992386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.992517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.992548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.992716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.992747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.992946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.992977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 
00:27:45.519 [2024-12-09 17:38:11.993102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.993132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.993320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.993359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.993624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.993655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.993780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.993810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.993998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.994028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 
00:27:45.519 [2024-12-09 17:38:11.994215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.994253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.519 [2024-12-09 17:38:11.994436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-12-09 17:38:11.994467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.519 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.994641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.994671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.994788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.994818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.995032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.995063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 
00:27:45.520 [2024-12-09 17:38:11.995164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.995211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.995410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.995441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.995610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.995641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.995747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.995777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.996038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.996069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 
00:27:45.520 [2024-12-09 17:38:11.996227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.996259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.996445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.996477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.996665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.996695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.996822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.996852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.997111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.997142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 
00:27:45.520 [2024-12-09 17:38:11.997319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.997350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.997539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.997570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.997755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.997786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.997996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.998026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.998194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.998227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 
00:27:45.520 [2024-12-09 17:38:11.998345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.998376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.998550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.998581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.998710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.998741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.998969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.999041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.999259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.999296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 
00:27:45.520 [2024-12-09 17:38:11.999482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.999514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.999686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.999718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:11.999906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:11.999937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:12.000189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:12.000223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:12.000405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:12.000437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 
00:27:45.520 [2024-12-09 17:38:12.000697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:12.000729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:12.000989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:12.001020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:12.001233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:12.001266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:12.001522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:12.001553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:12.001795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:12.001827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 
00:27:45.520 [2024-12-09 17:38:12.002039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:12.002072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:12.002277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:12.002319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:12.002504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:12.002536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:12.002770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:12.002801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.520 [2024-12-09 17:38:12.002993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:12.003023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 
00:27:45.520 [2024-12-09 17:38:12.003225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.520 [2024-12-09 17:38:12.003258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.520 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.003428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.003459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.003736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.003768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.003960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.003991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.004185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.004218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 
00:27:45.803 [2024-12-09 17:38:12.004492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.004524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.004703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.004735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.004920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.004952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.005153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.005200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.005319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.005351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 
00:27:45.803 [2024-12-09 17:38:12.005564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.005596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.005834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.005864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.006053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.006084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.006291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.006324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.006514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.006545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 
00:27:45.803 [2024-12-09 17:38:12.006721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.006753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.006936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.006967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.007145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-12-09 17:38:12.007185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-12-09 17:38:12.007379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.007411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.007605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.007637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 
00:27:45.804 [2024-12-09 17:38:12.007747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.007779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.008027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.008058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.008295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.008328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.008564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.008636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.008902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.008938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 
00:27:45.804 [2024-12-09 17:38:12.009131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.009163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.009448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.009480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.009594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.009626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.009745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.009777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.009907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.009938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 
00:27:45.804 [2024-12-09 17:38:12.010052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.010084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.010324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.010358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.010559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.010591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.010709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.010740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.010927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.010959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 
00:27:45.804 [2024-12-09 17:38:12.011255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.011289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.011492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.011524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.011720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.011751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.011950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.011982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.012188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.012222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 
00:27:45.804 [2024-12-09 17:38:12.012397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.012428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.012634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.012665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.012853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.012884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.013057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.013088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.013324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.013358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 
00:27:45.804 [2024-12-09 17:38:12.013538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.013570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.013695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.013727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.013844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.013874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.014043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.014074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.014283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.014316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 
00:27:45.804 [2024-12-09 17:38:12.014428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.014467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.014574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.014604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.014735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.014768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.014883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.014914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.015095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.015125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 
00:27:45.804 [2024-12-09 17:38:12.015322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.015355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.015559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-12-09 17:38:12.015589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-12-09 17:38:12.015708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.015740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.015853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.015885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.016075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.016107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 
00:27:45.805 [2024-12-09 17:38:12.016289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.016322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.016500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.016531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.016731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.016762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.016941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.016972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.017260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.017294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 
00:27:45.805 [2024-12-09 17:38:12.017413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.017443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.017570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.017601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.017781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.017813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.017992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.018022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.018217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.018249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 
00:27:45.805 [2024-12-09 17:38:12.018350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.018382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.018575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.018606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.018818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.018849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.019130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.019161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.019354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.019386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 
00:27:45.805 [2024-12-09 17:38:12.019630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.019660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.019778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.019809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.019923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.019959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.020087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.020118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.020405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.020439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 
00:27:45.805 [2024-12-09 17:38:12.020574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.020604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.020729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.020759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.020926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.020957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.021129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.021161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.021342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.021373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 
00:27:45.805 [2024-12-09 17:38:12.021499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.021530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.021650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.021682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.021864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.021894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.022074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.022104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 00:27:45.805 [2024-12-09 17:38:12.022304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.805 [2024-12-09 17:38:12.022338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.805 qpair failed and we were unable to recover it. 
00:27:45.805 [2024-12-09 17:38:12.022523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.805 [2024-12-09 17:38:12.022555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.805 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair recovery failures repeated for tqpair=0x1f261a0 from 17:38:12.022683 through 17:38:12.035397 ...]
00:27:45.807 [2024-12-09 17:38:12.035639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.807 [2024-12-09 17:38:12.035710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.807 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair recovery failures repeated for tqpair=0x7f30ec000b90 from 17:38:12.035974 through 17:38:12.048208 ...]
00:27:45.808 [2024-12-09 17:38:12.048448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-12-09 17:38:12.048481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-12-09 17:38:12.048670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-12-09 17:38:12.048701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-12-09 17:38:12.048882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-12-09 17:38:12.048914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-12-09 17:38:12.049027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-12-09 17:38:12.049059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-12-09 17:38:12.049190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-12-09 17:38:12.049224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 
00:27:45.809 [2024-12-09 17:38:12.049462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.049494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.049628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.049660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.049921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.049952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.050071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.050102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.050239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.050272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 
00:27:45.809 [2024-12-09 17:38:12.050466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.050497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.050631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.050665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.050796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.050828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.051014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.051045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.051224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.051257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 
00:27:45.809 [2024-12-09 17:38:12.051434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.051466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.051650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.051682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.051871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.051904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.052162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.052218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.052368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.052401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 
00:27:45.809 [2024-12-09 17:38:12.052514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.052552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.052659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.052690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.052891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.052922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.053182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.053215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.053318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.053351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 
00:27:45.809 [2024-12-09 17:38:12.053523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.053555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.053815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.053846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.054058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.054090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.054262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.054295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.054467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.054498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 
00:27:45.809 [2024-12-09 17:38:12.054606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.054638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.054827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.054858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.055031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.055063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.055247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.055280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.055525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.055557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 
00:27:45.809 [2024-12-09 17:38:12.055726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.055759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.056000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.056031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.056203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.056235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.056352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.056384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.056484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.056516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 
00:27:45.809 [2024-12-09 17:38:12.056694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.056726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.056903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.056934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.057050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.057082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-12-09 17:38:12.057208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-12-09 17:38:12.057240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.057372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.057405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 
00:27:45.810 [2024-12-09 17:38:12.057577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.057609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.057831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.057864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.057999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.058032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.058141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.058181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.058308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.058340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 
00:27:45.810 [2024-12-09 17:38:12.058535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.058566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.058667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.058698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.058812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.058844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.058959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.058991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.059188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.059221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 
00:27:45.810 [2024-12-09 17:38:12.059399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.059431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.064194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.064258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.064554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.064595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.064865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.064904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.065087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.065121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 
00:27:45.810 [2024-12-09 17:38:12.065378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.065420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.065546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.065577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.065706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.065738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.065869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.065899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.066139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.066184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 
00:27:45.810 [2024-12-09 17:38:12.066334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.066367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.066616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.066649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.066892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.066925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.067163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.067196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.067312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.067337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 
00:27:45.810 [2024-12-09 17:38:12.067437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.067461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.067563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.067589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.067757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.067786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.069184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.069219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 00:27:45.810 [2024-12-09 17:38:12.069511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.810 [2024-12-09 17:38:12.069537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.810 qpair failed and we were unable to recover it. 
00:27:45.810 [2024-12-09 17:38:12.069742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.810 [2024-12-09 17:38:12.069768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.810 qpair failed and we were unable to recover it.
00:27:45.810 [... the same three-line sequence — connect() failed (errno = 111), sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeats continuously from 17:38:12.069 through 17:38:12.087; identical repeats elided ...]
00:27:45.813 [2024-12-09 17:38:12.087829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-12-09 17:38:12.087849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-12-09 17:38:12.087928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-12-09 17:38:12.087947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-12-09 17:38:12.088035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-12-09 17:38:12.088054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-12-09 17:38:12.088139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-12-09 17:38:12.088159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-12-09 17:38:12.088278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-12-09 17:38:12.088297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 
00:27:45.813 [2024-12-09 17:38:12.088455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-12-09 17:38:12.088475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.088560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.088579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.088665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.088684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.088780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.088798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.088942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.088961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 
00:27:45.814 [2024-12-09 17:38:12.089068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.089087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.089255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.089276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.089499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.089530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.089644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.089676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.089842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.089873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 
00:27:45.814 [2024-12-09 17:38:12.089988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.090019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.090204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.090237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.090360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.090392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.090504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.090540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.090613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.090632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 
00:27:45.814 [2024-12-09 17:38:12.090711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.090729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.090807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.090826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.090904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.090922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.091018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.091037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.091210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.091230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 
00:27:45.814 [2024-12-09 17:38:12.091317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.091336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.091420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.091438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.091547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.091566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.091722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.091741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 00:27:45.814 [2024-12-09 17:38:12.091894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.814 [2024-12-09 17:38:12.091925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.814 qpair failed and we were unable to recover it. 
00:27:45.814 [2024-12-09 17:38:12.092435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.814 [2024-12-09 17:38:12.092507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.814 qpair failed and we were unable to recover it.
00:27:45.816 [2024-12-09 17:38:12.101869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.101893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.102006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.102030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.102135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.102160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.102352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.102377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.102463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.102487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 
00:27:45.816 [2024-12-09 17:38:12.102591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.102615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.102778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.102803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.102892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.102916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.103024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.103049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.103232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.103257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 
00:27:45.816 [2024-12-09 17:38:12.103412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.103436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.103659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.103691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.103875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.103906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.104077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.104110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.104218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.104243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 
00:27:45.816 [2024-12-09 17:38:12.104395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.104424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.104579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.104603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.104826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.104864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.104986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.105017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.105151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.105231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 
00:27:45.816 [2024-12-09 17:38:12.105406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.105439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.105618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.105642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.105748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.105772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.105928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-12-09 17:38:12.105952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-12-09 17:38:12.106109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.106135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-12-09 17:38:12.106414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.106438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.106534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.106558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.106749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.106773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.106946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.106970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.107092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.107116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-12-09 17:38:12.107281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.107307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.107470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.107511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.107677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.107708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.107831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.107863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.108000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.108032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-12-09 17:38:12.108144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.108175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.108282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.108309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.108465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.108491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.108656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.108682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.108789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.108815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-12-09 17:38:12.108921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.108947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.109039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.109065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.109297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.109368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.109497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.109534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.109656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.109693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-12-09 17:38:12.109807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.109838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.110027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.110059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.110230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.110265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.110473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.110505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.110628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.110658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-12-09 17:38:12.110782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.110813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.110936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.110966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.111067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.111097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.111219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.111253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.111372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.111406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-12-09 17:38:12.111539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.111569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.111748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.111779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.111946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.111977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.112096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.112127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.112331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.112363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-12-09 17:38:12.112560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.112590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.112759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.112790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.112908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-12-09 17:38:12.112940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-12-09 17:38:12.113115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-12-09 17:38:12.113146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-12-09 17:38:12.113338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-12-09 17:38:12.113369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 
00:27:45.818 [2024-12-09 17:38:12.113479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-12-09 17:38:12.113509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-12-09 17:38:12.113626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-12-09 17:38:12.113657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-12-09 17:38:12.113778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-12-09 17:38:12.113808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-12-09 17:38:12.113927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-12-09 17:38:12.113960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-12-09 17:38:12.114069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-12-09 17:38:12.114099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 
00:27:45.818 [2024-12-09 17:38:12.114265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-12-09 17:38:12.114292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-12-09 17:38:12.114470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-12-09 17:38:12.114496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-12-09 17:38:12.114590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-12-09 17:38:12.114615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-12-09 17:38:12.114732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-12-09 17:38:12.114773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-12-09 17:38:12.114884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-12-09 17:38:12.114916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 
00:27:45.818 [2024-12-09 17:38:12.115093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.818 [2024-12-09 17:38:12.115123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.818 qpair failed and we were unable to recover it.
00:27:45.821 [2024-12-09 17:38:12.134791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.134821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.134925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.134956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.135086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.135116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.135259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.135292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.135422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.135454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 
00:27:45.821 [2024-12-09 17:38:12.135563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.135594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.135762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.135793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.135978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.136009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.136248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.136282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.136473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.136505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 
00:27:45.821 [2024-12-09 17:38:12.136632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.136663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.136772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.136802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.136931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.136962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.137143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.137184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.137289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.137320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 
00:27:45.821 [2024-12-09 17:38:12.137489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.137520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.137629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.137661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.137780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.137810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.138001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.138033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.138133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.138164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 
00:27:45.821 [2024-12-09 17:38:12.138298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.138330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.138518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.138549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.138728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.138762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.138884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.138913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-12-09 17:38:12.139037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-12-09 17:38:12.139067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-12-09 17:38:12.139208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.139240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.139366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.139397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.139529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.139561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.139678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.139709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.139897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.139928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-12-09 17:38:12.140040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.140071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.140194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.140227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.140401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.140432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.140632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.140664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.140986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.141017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-12-09 17:38:12.141192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.141226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.141334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.141366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.141482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.141515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.141628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.141659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.141847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.141890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-12-09 17:38:12.142000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.142032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.142144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.142182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.142300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.142332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.142438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.142469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.142579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.142611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-12-09 17:38:12.142727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.142759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.142941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.142974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.143085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.143117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.143226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.143259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.143448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.143479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-12-09 17:38:12.143716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.143748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.143849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.143880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.143995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.144027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.144209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.144243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.144345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.144376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-12-09 17:38:12.144494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.144526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.144647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.144678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.144788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.144820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.144921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.144952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.145066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.145097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-12-09 17:38:12.145209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.145243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.145354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.145386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.145623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.145655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-12-09 17:38:12.145843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-12-09 17:38:12.145874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.823 [2024-12-09 17:38:12.146042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-12-09 17:38:12.146073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 
00:27:45.823 [2024-12-09 17:38:12.146196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-12-09 17:38:12.146229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-12-09 17:38:12.146350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-12-09 17:38:12.146381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-12-09 17:38:12.146551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-12-09 17:38:12.146584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-12-09 17:38:12.146698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-12-09 17:38:12.146729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-12-09 17:38:12.146989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-12-09 17:38:12.147020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 
00:27:45.823 [2024-12-09 17:38:12.147124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-12-09 17:38:12.147156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-12-09 17:38:12.147306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-12-09 17:38:12.147339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-12-09 17:38:12.147476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-12-09 17:38:12.147507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-12-09 17:38:12.147624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-12-09 17:38:12.147655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-12-09 17:38:12.147774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-12-09 17:38:12.147805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 
00:27:45.823 [2024-12-09 17:38:12.147922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-12-09 17:38:12.147953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it.
[... identical error triple (connect() failed, errno = 111 / sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated continuously from 17:38:12.147 through 17:38:12.165; repeats trimmed for brevity ...]
00:27:45.826 [2024-12-09 17:38:12.165327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.165399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it.
[... same error triple repeated for tqpair=0x7f30f0000b90; repeats trimmed ...]
00:27:45.826 [2024-12-09 17:38:12.165887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.165919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it.
00:27:45.826 [2024-12-09 17:38:12.166095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.166127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.166340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.166375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.166551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.166582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.166760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.166792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.166913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.166944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 
00:27:45.826 [2024-12-09 17:38:12.167066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.167097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.167218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.167245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.167344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.167369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.167480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.167507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.167602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.167633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 
00:27:45.826 [2024-12-09 17:38:12.167738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.167764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.167970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.167996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.168107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.168133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.168262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.168295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.168403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.168434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 
00:27:45.826 [2024-12-09 17:38:12.168607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.168638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.168741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.168773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.168888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.168920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.169121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.169153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.169268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.169300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 
00:27:45.826 [2024-12-09 17:38:12.169413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.169443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.169548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.169580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.169702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.169733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.169846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.169878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.170053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.170084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 
00:27:45.826 [2024-12-09 17:38:12.170192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.826 [2024-12-09 17:38:12.170224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.826 qpair failed and we were unable to recover it. 00:27:45.826 [2024-12-09 17:38:12.170402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.170432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.170558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.170589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.170703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.170734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.170844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.170875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 
00:27:45.827 [2024-12-09 17:38:12.171054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.171085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.171214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.171245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.171349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.171381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.171572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.171603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.171802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.171833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 
00:27:45.827 [2024-12-09 17:38:12.171948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.171979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.172144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.172230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.172362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.172398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.172533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.172566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.172674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.172707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 
00:27:45.827 [2024-12-09 17:38:12.172893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.172924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.173089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.173121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.173253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.173286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.173460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.173491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.173660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.173692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 
00:27:45.827 [2024-12-09 17:38:12.173871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.173903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.174012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.174044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.174164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.174209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.174321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.174353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.174540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.174582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 
00:27:45.827 [2024-12-09 17:38:12.174705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.174737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.175020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.175052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.175241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.175274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.175445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.175477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.175606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.175637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 
00:27:45.827 [2024-12-09 17:38:12.175758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.175790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.175963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.175995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.176112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.176144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.176321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.176354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.176625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.176657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 
00:27:45.827 [2024-12-09 17:38:12.176773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.176805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.176983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.177015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.177118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.177150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.177283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.177316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 00:27:45.827 [2024-12-09 17:38:12.177437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.177470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.827 qpair failed and we were unable to recover it. 
00:27:45.827 [2024-12-09 17:38:12.177590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.827 [2024-12-09 17:38:12.177622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.177832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.177863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.177979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.178010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.178113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.178147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.178283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.178317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 
00:27:45.828 [2024-12-09 17:38:12.178432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.178464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.178779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.178813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.178995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.179026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.179145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.179190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.179368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.179400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 
00:27:45.828 [2024-12-09 17:38:12.179516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.179549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.179707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.179777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.179913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.179952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.180176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.180211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.180330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.180362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 
00:27:45.828 [2024-12-09 17:38:12.180469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.180501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.180672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.180703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.180812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.180843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.181015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.181046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.181231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.181264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 
00:27:45.828 [2024-12-09 17:38:12.181372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.181403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.181590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.181621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.181747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.181779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.181887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.181918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.182039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.182081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 
00:27:45.828 [2024-12-09 17:38:12.182192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.182224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.182337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.182367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.182469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.182499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.182599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.182629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.182736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.182766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 
00:27:45.828 [2024-12-09 17:38:12.182976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.183007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.183137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.183177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.183303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.183334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.183511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.183541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.183648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.183680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 
00:27:45.828 [2024-12-09 17:38:12.183804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.183835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.183954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.183984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.184089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.184120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.184352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.184385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 00:27:45.828 [2024-12-09 17:38:12.184504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.828 [2024-12-09 17:38:12.184535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.828 qpair failed and we were unable to recover it. 
00:27:45.829 [2024-12-09 17:38:12.184657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.184688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.184795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.184826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.184937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.184967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.185144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.185185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.185429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.185461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 
00:27:45.829 [2024-12-09 17:38:12.185568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.185601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.185727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.185757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.185930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.185961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.186139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.186180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.186388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.186420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 
00:27:45.829 [2024-12-09 17:38:12.186535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.186567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.186704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.186745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.186861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.186892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.187019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.187051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.187235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.187269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 
00:27:45.829 [2024-12-09 17:38:12.187442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.187475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.187586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.187617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.187798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.187831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.187995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.188027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.188202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.188236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 
00:27:45.829 [2024-12-09 17:38:12.188405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.188436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.188562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.188594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.188707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.188737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.188906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.188936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.189069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.189101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 
00:27:45.829 [2024-12-09 17:38:12.189238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.189272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.189387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.189417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.189519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.189549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.189670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.189701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.189871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.189904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 
00:27:45.829 [2024-12-09 17:38:12.190145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.190189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.190366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.190399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.190517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.190549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.190678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.190708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.190820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.190851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 
00:27:45.829 [2024-12-09 17:38:12.191029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.191059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.191178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.191210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.191317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.191347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.191445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.829 [2024-12-09 17:38:12.191482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.829 qpair failed and we were unable to recover it. 00:27:45.829 [2024-12-09 17:38:12.191581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.191611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 
00:27:45.830 [2024-12-09 17:38:12.191794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.191825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.192020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.192057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.192243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.192275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.192395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.192426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.192528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.192557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 
00:27:45.830 [2024-12-09 17:38:12.192662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.192692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.192812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.192843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.192954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.192987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.193109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.193139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.193337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.193375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 
00:27:45.830 [2024-12-09 17:38:12.193502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.193537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.193656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.193687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.193817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.193848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.194057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.194090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.194205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.194237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 
00:27:45.830 [2024-12-09 17:38:12.194352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.194383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.194568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.194600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.194783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.194814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.194921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.194952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.195074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.195105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 
00:27:45.830 [2024-12-09 17:38:12.195299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.195331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.195447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.195478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.195592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.195624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.195737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.195769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.195890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.195921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 
00:27:45.830 [2024-12-09 17:38:12.196033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.196069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.196193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.196227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.196405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.196437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.196556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.196588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.196697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.196730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 
00:27:45.830 [2024-12-09 17:38:12.196914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.196947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.197204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.197238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.197360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.197392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.197521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.830 [2024-12-09 17:38:12.197553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.830 qpair failed and we were unable to recover it. 00:27:45.830 [2024-12-09 17:38:12.197732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.197764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 
00:27:45.831 [2024-12-09 17:38:12.197961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.197992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.198243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.198278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.198384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.198416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.198656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.198695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.198818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.198850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 
00:27:45.831 [2024-12-09 17:38:12.199024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.199057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.199187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.199221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.199331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.199363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.199495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.199526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.199696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.199729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 
00:27:45.831 [2024-12-09 17:38:12.199848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.199879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.200102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.200134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.200263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.200297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.200483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.200515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.200690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.200723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 
00:27:45.831 [2024-12-09 17:38:12.200845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.200878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.201049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.201080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.201213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.201248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.201371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.201404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.201643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.201675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 
00:27:45.831 [2024-12-09 17:38:12.201796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.201828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.202001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.202034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.202225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.202258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.202396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.202429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.202569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.202601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 
00:27:45.831 [2024-12-09 17:38:12.202772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.202805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.202925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.202957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.203160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.203203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.203330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.203363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.203548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.203579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 
00:27:45.831 [2024-12-09 17:38:12.203699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.203732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.203908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.203940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.204043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.204075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.204209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.204242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.204364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.204396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 
00:27:45.831 [2024-12-09 17:38:12.204576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.204608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.204728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.204760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.204958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.831 [2024-12-09 17:38:12.204990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.831 qpair failed and we were unable to recover it. 00:27:45.831 [2024-12-09 17:38:12.205198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.205232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.205341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.205373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 
00:27:45.832 [2024-12-09 17:38:12.205489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.205521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.205715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.205747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.205938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.205971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.206087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.206126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.206377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.206411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 
00:27:45.832 [2024-12-09 17:38:12.206598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.206630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.206744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.206775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.207005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.207037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.207209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.207242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.207353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.207385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 
00:27:45.832 [2024-12-09 17:38:12.207511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.207543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.207780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.207812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.208050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.208084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.208207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.208240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.208426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.208458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 
00:27:45.832 [2024-12-09 17:38:12.208650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.208684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.208876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.208907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.209088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.209121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.209311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.209344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.209458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.209490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 
00:27:45.832 [2024-12-09 17:38:12.209611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.209642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.209763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.209795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.209900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.209932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.210047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.210078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.210188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.210223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 
00:27:45.832 [2024-12-09 17:38:12.210412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.210444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.210638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.210670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.210787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.210820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.211011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.211042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.211158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.211202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 
00:27:45.832 [2024-12-09 17:38:12.211438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.211508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.211710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.211745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.211869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.211902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.212086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.212118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.212331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.212364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 
00:27:45.832 [2024-12-09 17:38:12.212501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.212532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.212647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.212679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.832 qpair failed and we were unable to recover it. 00:27:45.832 [2024-12-09 17:38:12.212879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.832 [2024-12-09 17:38:12.212912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.213102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.213135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.213331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.213367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 
00:27:45.833 [2024-12-09 17:38:12.213484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.213516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.213689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.213723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.213926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.213957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.214063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.214096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.214317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.214350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 
00:27:45.833 [2024-12-09 17:38:12.214532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.214563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.214740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.214772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.214880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.214912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.215017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.215050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.215177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.215210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 
00:27:45.833 [2024-12-09 17:38:12.215327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.215359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.215500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.215534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.215731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.215763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.215885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.215917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.216037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.216068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 
00:27:45.833 [2024-12-09 17:38:12.216248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.216282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.216393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.216425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.216544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.216576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.216696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.216728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.216845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.216877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 
00:27:45.833 [2024-12-09 17:38:12.217009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.217042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.217225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.217259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.217392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.217424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.217540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.217571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.217741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.217772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 
00:27:45.833 [2024-12-09 17:38:12.217948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.217980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.218105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.218138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.218325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.218358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.218461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.218492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.218614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.218646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 
00:27:45.833 [2024-12-09 17:38:12.218770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.218808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.218913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.218945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.219127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.219159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.219362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.219395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.219588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.219619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 
00:27:45.833 [2024-12-09 17:38:12.219817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.219849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.833 [2024-12-09 17:38:12.220022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.833 [2024-12-09 17:38:12.220053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.833 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.220195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.220230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.220468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.220500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.220649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.220680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 
00:27:45.834 [2024-12-09 17:38:12.220853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.220885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.221084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.221116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.221362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.221394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.221589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.221621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.221744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.221797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 
00:27:45.834 [2024-12-09 17:38:12.221981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.222013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.222147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.222191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.222434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.222467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.222644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.222676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.222789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.222821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 
00:27:45.834 [2024-12-09 17:38:12.223009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.223041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.223152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.223195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.223297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.223329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.223498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.223530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.223745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.223776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 
00:27:45.834 [2024-12-09 17:38:12.223900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.223932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.224108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.224140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.224416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.224488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.224695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.224732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.224843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.224875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 
00:27:45.834 [2024-12-09 17:38:12.224999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.225030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.225147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.225192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.225312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.225343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.225535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.225566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.225747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.225780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 
00:27:45.834 [2024-12-09 17:38:12.225954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.225985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.226105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.226137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.226249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.226282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.226387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.226418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.226516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.226548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 
00:27:45.834 [2024-12-09 17:38:12.226669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.226712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.834 [2024-12-09 17:38:12.226921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.834 [2024-12-09 17:38:12.226951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.834 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.227177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.227212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.227403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.227435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.227553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.227584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 
00:27:45.835 [2024-12-09 17:38:12.227700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.227730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.227849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.227880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.228004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.228035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.228181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.228214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.228337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.228369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 
00:27:45.835 [2024-12-09 17:38:12.228485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.228516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.228630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.228662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.228857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.228889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.229070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.229101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.229234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.229268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 
00:27:45.835 [2024-12-09 17:38:12.229442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.229473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.229659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.229691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.229866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.229899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.230095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.230126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.230258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.230292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 
00:27:45.835 [2024-12-09 17:38:12.230400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.230432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.230616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.230646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.230777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.230809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.231002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.231034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.231138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.231181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 
00:27:45.835 [2024-12-09 17:38:12.231361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.231393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.231506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.231538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.231770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.231840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.231974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.232011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.232210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.232245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 
00:27:45.835 [2024-12-09 17:38:12.232442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.232474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.232665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.232698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.232879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.232912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.233103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.233134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.233263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.233296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 
00:27:45.835 [2024-12-09 17:38:12.233466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.233498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.233609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.233640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.233826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.233858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.233970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.234001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 00:27:45.835 [2024-12-09 17:38:12.234108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.835 [2024-12-09 17:38:12.234139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.835 qpair failed and we were unable to recover it. 
00:27:45.835 [2024-12-09 17:38:12.234259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.835 [2024-12-09 17:38:12.234301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.835 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.234482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.234515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.234651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.234683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.234920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.234951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.235072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.235103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.235341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.235373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.235566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.235597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.235781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.235813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.235922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.235954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.236085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.236117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.236301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.236334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.236450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.236482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.236606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.236639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.236874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.236904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.237018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.237050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.237219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.237253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.237376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.237407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.237604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.237635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.237741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.237773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.237894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.237926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.238114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.238145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.238258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.238290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.238410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.238441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.238569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.238600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.238766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.238797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.238900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.238931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.239044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.239076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.239380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.239452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.239701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.239772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.239966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.240003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.240207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.240242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.240424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.240456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.240576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.240607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.240732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.240763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.240870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.240902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.241023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.241055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.241245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.241279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.241464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.241495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.241682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.241714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.241821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.836 [2024-12-09 17:38:12.241853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.836 qpair failed and we were unable to recover it.
00:27:45.836 [2024-12-09 17:38:12.241977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.242018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.242210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.242243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.242427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.242458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.242570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.242602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.242784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.242818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.242996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.243028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.243186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.243219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.243330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.243362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.243502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.243533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.243653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.243686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.243799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.243831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.243946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.243977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.244101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.244133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.244280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.244314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.244433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.244464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.244583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.244615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.244738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.244771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.244885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.244916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.245024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.245057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.245158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.245202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.245314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.245346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.245460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.245491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.245620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.245651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.245774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.245805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.245922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.245954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.246070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.246102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.246214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.246248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.246369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.246410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.246590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.246623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.246742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.246774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.246945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.246977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.247091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.247123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.247321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.247356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.247480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.247512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.247628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.247658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.247859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.247892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.247999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.248032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.248156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.248200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.248393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.837 [2024-12-09 17:38:12.248425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.837 qpair failed and we were unable to recover it.
00:27:45.837 [2024-12-09 17:38:12.248612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.248644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.248819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.248851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.249061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.249093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.249214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.249246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.249419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.249450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.249642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.249674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.249780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.249811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.249983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.250012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.250124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.250157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.250344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.250376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.250511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.250542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.250713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.250744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.250848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.250880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.251050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.251081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.251210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.251242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.251412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.838 [2024-12-09 17:38:12.251450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:45.838 qpair failed and we were unable to recover it.
00:27:45.838 [2024-12-09 17:38:12.251556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.251587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.251761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.251791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.251907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.251938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.252124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.252156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.252355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.252386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 
00:27:45.838 [2024-12-09 17:38:12.252497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.252528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.252766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.252797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.252911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.252945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.253118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.253149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.253272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.253304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 
00:27:45.838 [2024-12-09 17:38:12.253476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.253509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.253633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.253663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.253841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.253872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.254001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.254033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.254238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.254285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 
00:27:45.838 [2024-12-09 17:38:12.254493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.254525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.254711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.254743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.254912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.254943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.255077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.255107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 00:27:45.838 [2024-12-09 17:38:12.255288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.838 [2024-12-09 17:38:12.255322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.838 qpair failed and we were unable to recover it. 
00:27:45.839 [2024-12-09 17:38:12.255442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.255474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.255672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.255703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.255810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.255841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.256018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.256050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.256150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.256192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 
00:27:45.839 [2024-12-09 17:38:12.256326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.256358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.256491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.256529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.256706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.256737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.256910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.256942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.257049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.257080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 
00:27:45.839 [2024-12-09 17:38:12.257196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.257228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.257359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.257390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.257504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.257535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.257655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.257686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.257949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.257980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 
00:27:45.839 [2024-12-09 17:38:12.258082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.258113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.258294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.258325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.258431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.258461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.258637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.258668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.258776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.258806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 
00:27:45.839 [2024-12-09 17:38:12.258931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.258962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.259144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.259187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.259318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.259349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.259469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.259499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.259625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.259658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 
00:27:45.839 [2024-12-09 17:38:12.259834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.259864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.260101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.260132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.260320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.260352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.260457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.260487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.260615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.260644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 
00:27:45.839 [2024-12-09 17:38:12.260829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.260860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.261054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.261084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.261198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.261233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.261410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.261449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.261629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.261659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 
00:27:45.839 [2024-12-09 17:38:12.261786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.261817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.261945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.261977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.262085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.262115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.262249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.839 [2024-12-09 17:38:12.262283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.839 qpair failed and we were unable to recover it. 00:27:45.839 [2024-12-09 17:38:12.262458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.262489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 
00:27:45.840 [2024-12-09 17:38:12.262672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.262703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.262824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.262854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.263035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.263066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.263193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.263225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.263339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.263369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 
00:27:45.840 [2024-12-09 17:38:12.263553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.263586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.263702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.263732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.263947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.264019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.264227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.264265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.264512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.264546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 
00:27:45.840 [2024-12-09 17:38:12.264718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.264749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.264930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.264961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.265143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.265183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.265372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.265403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.265506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.265538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 
00:27:45.840 [2024-12-09 17:38:12.265660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.265692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.265868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.265900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.266072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.266103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.266223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.266256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.266371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.266403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 
00:27:45.840 [2024-12-09 17:38:12.266581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.266618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.266781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.266814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.267051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.267082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.267201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.267236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.267338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.267369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 
00:27:45.840 [2024-12-09 17:38:12.267488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.267519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.267645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.267677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.267851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.267883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.268089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.268120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.268312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.268345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 
00:27:45.840 [2024-12-09 17:38:12.268532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.268563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.268803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.268836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.269022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.269053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.269157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.269202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.269398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.269431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 
00:27:45.840 [2024-12-09 17:38:12.269615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.269646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.269771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.269803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.269985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.270017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.840 qpair failed and we were unable to recover it. 00:27:45.840 [2024-12-09 17:38:12.270203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.840 [2024-12-09 17:38:12.270236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.270347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.270379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 
00:27:45.841 [2024-12-09 17:38:12.270552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.270584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.270754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.270785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.270945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.270976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.271104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.271136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.271387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.271457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 
00:27:45.841 [2024-12-09 17:38:12.271619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.271654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.271834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.271866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.272010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.272045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.272229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.272262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.272440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.272471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 
00:27:45.841 [2024-12-09 17:38:12.272590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.272622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.272741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.272773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.272875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.272906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.273014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.273045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.273185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.273219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 
00:27:45.841 [2024-12-09 17:38:12.273440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.273472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.273602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.273634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.273836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.273867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.274106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.274137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.274266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.274303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 
00:27:45.841 [2024-12-09 17:38:12.274415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.274456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.274622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.274655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.274787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.274820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.274993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.275026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.275204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.275237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 
00:27:45.841 [2024-12-09 17:38:12.275357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.275389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.275566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.275597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.275719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.275751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.275919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.275951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.276074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.276106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 
00:27:45.841 [2024-12-09 17:38:12.276213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.276247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.276428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.276461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.276630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.276662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.276787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.276820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.276941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.276974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 
00:27:45.841 [2024-12-09 17:38:12.277094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.277126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.277349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.277383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.841 qpair failed and we were unable to recover it. 00:27:45.841 [2024-12-09 17:38:12.277585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.841 [2024-12-09 17:38:12.277618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.277790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.277822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.277999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.278030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 
00:27:45.842 [2024-12-09 17:38:12.278153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.278196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.278376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.278408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.278543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.278575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.278703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.278736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.278982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.279014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 
00:27:45.842 [2024-12-09 17:38:12.279132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.279164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.279384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.279416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.279641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.279711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.279853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.279888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.280101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.280133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 
00:27:45.842 [2024-12-09 17:38:12.280274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.280307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.280509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.280541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.280738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.280768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.280957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.280989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.281110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.281142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 
00:27:45.842 [2024-12-09 17:38:12.281294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.281325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.281448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.281480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.281653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.281684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.281866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.281896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.282008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.282039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 
00:27:45.842 [2024-12-09 17:38:12.282230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.282272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.282389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.282420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.282537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.282569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.282818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.282848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.282951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.282982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 
00:27:45.842 [2024-12-09 17:38:12.283152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.283197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.283498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.283530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.283710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.283742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.285161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.285227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.285515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.285548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 
00:27:45.842 [2024-12-09 17:38:12.285727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.285761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.285880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.285911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.286180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.286213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.286428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.286462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.286596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.286629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 
00:27:45.842 [2024-12-09 17:38:12.286811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.286850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.842 [2024-12-09 17:38:12.287085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.842 [2024-12-09 17:38:12.287118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.842 qpair failed and we were unable to recover it. 00:27:45.843 [2024-12-09 17:38:12.287356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.843 [2024-12-09 17:38:12.287391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.843 qpair failed and we were unable to recover it. 00:27:45.843 [2024-12-09 17:38:12.287629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.843 [2024-12-09 17:38:12.287661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.843 qpair failed and we were unable to recover it. 00:27:45.843 [2024-12-09 17:38:12.287796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.843 [2024-12-09 17:38:12.287827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.843 qpair failed and we were unable to recover it. 
00:27:45.843 [2024-12-09 17:38:12.288021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.843 [2024-12-09 17:38:12.288053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.843 qpair failed and we were unable to recover it. 00:27:45.843 [2024-12-09 17:38:12.288194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.843 [2024-12-09 17:38:12.288227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.843 qpair failed and we were unable to recover it. 00:27:45.843 [2024-12-09 17:38:12.288333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.843 [2024-12-09 17:38:12.288365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.843 qpair failed and we were unable to recover it. 00:27:45.843 [2024-12-09 17:38:12.288542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.843 [2024-12-09 17:38:12.288573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.843 qpair failed and we were unable to recover it. 00:27:45.843 [2024-12-09 17:38:12.288754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.843 [2024-12-09 17:38:12.288786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.843 qpair failed and we were unable to recover it. 
00:27:45.844 [2024-12-09 17:38:12.296467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.844 [2024-12-09 17:38:12.296499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.844 qpair failed and we were unable to recover it.
00:27:45.844 [2024-12-09 17:38:12.296699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.844 [2024-12-09 17:38:12.296731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.844 qpair failed and we were unable to recover it.
00:27:45.844 [2024-12-09 17:38:12.296951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.844 [2024-12-09 17:38:12.297022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.844 qpair failed and we were unable to recover it.
00:27:45.844 [2024-12-09 17:38:12.297302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.844 [2024-12-09 17:38:12.297344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.844 qpair failed and we were unable to recover it.
00:27:45.844 [2024-12-09 17:38:12.297515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.844 [2024-12-09 17:38:12.297548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.844 qpair failed and we were unable to recover it.
00:27:45.845 [2024-12-09 17:38:12.302974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.845 [2024-12-09 17:38:12.303006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.845 qpair failed and we were unable to recover it.
00:27:45.845 [2024-12-09 17:38:12.303106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.845 [2024-12-09 17:38:12.303137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:45.845 qpair failed and we were unable to recover it.
00:27:45.845 [2024-12-09 17:38:12.303327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.845 [2024-12-09 17:38:12.303366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.845 qpair failed and we were unable to recover it.
00:27:45.845 [2024-12-09 17:38:12.303482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.845 [2024-12-09 17:38:12.303514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.845 qpair failed and we were unable to recover it.
00:27:45.845 [2024-12-09 17:38:12.303717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.845 [2024-12-09 17:38:12.303749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:45.845 qpair failed and we were unable to recover it.
00:27:45.846 [2024-12-09 17:38:12.310374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.310405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.310537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.310568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.310701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.310732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.310843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.310874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.311066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.311098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 
00:27:45.846 [2024-12-09 17:38:12.311280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.311313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.311438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.311470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.311582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.311614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.311736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.311767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.311883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.311915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 
00:27:45.846 [2024-12-09 17:38:12.312013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.312044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.312164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.312231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.312405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.312443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.312563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.312596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.312719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.312749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 
00:27:45.846 [2024-12-09 17:38:12.312883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.312914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.313013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.313045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.313168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.313225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.313352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.313383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.313566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.313598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 
00:27:45.846 [2024-12-09 17:38:12.313770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.313803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.313908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.313940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.314194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.314227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.314402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.314433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.314717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.314749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 
00:27:45.846 [2024-12-09 17:38:12.314850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.314881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.315072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.315104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.315224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.315256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.315373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.315404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.315595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.315625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 
00:27:45.846 [2024-12-09 17:38:12.315743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.315774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.315887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.315919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.316021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.316052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.316220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.316253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.316388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.316420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 
00:27:45.846 [2024-12-09 17:38:12.316523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.316554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.316656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.316688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.316799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.316831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.317001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.846 [2024-12-09 17:38:12.317032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.846 qpair failed and we were unable to recover it. 00:27:45.846 [2024-12-09 17:38:12.317221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.317254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 
00:27:45.847 [2024-12-09 17:38:12.317372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.317403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 00:27:45.847 [2024-12-09 17:38:12.317579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.317611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 00:27:45.847 [2024-12-09 17:38:12.317783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.317813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 00:27:45.847 [2024-12-09 17:38:12.317934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.317965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 00:27:45.847 [2024-12-09 17:38:12.318142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.318193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 
00:27:45.847 [2024-12-09 17:38:12.318298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.318329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 00:27:45.847 [2024-12-09 17:38:12.318467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.318499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 00:27:45.847 [2024-12-09 17:38:12.318665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.318697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 00:27:45.847 [2024-12-09 17:38:12.318802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.318833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 00:27:45.847 [2024-12-09 17:38:12.319010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.319042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 
00:27:45.847 [2024-12-09 17:38:12.319179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.319211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 00:27:45.847 [2024-12-09 17:38:12.319378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.319410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 00:27:45.847 [2024-12-09 17:38:12.319602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.319640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 00:27:45.847 [2024-12-09 17:38:12.319772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.847 [2024-12-09 17:38:12.319805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:45.847 qpair failed and we were unable to recover it. 00:27:46.127 [2024-12-09 17:38:12.319917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.127 [2024-12-09 17:38:12.319949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.127 qpair failed and we were unable to recover it. 
00:27:46.127 [2024-12-09 17:38:12.320114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.127 [2024-12-09 17:38:12.320146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.127 qpair failed and we were unable to recover it. 00:27:46.127 [2024-12-09 17:38:12.320263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.127 [2024-12-09 17:38:12.320296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.127 qpair failed and we were unable to recover it. 00:27:46.127 [2024-12-09 17:38:12.320465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.127 [2024-12-09 17:38:12.320496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.127 qpair failed and we were unable to recover it. 00:27:46.127 [2024-12-09 17:38:12.320739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.127 [2024-12-09 17:38:12.320770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.127 qpair failed and we were unable to recover it. 00:27:46.127 [2024-12-09 17:38:12.320891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.127 [2024-12-09 17:38:12.320922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.127 qpair failed and we were unable to recover it. 
00:27:46.127 [2024-12-09 17:38:12.321104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.127 [2024-12-09 17:38:12.321135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.127 qpair failed and we were unable to recover it. 00:27:46.127 [2024-12-09 17:38:12.321325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.127 [2024-12-09 17:38:12.321358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.127 qpair failed and we were unable to recover it. 00:27:46.127 [2024-12-09 17:38:12.321463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.127 [2024-12-09 17:38:12.321494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.127 qpair failed and we were unable to recover it. 00:27:46.127 [2024-12-09 17:38:12.321606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.127 [2024-12-09 17:38:12.321638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.127 qpair failed and we were unable to recover it. 00:27:46.127 [2024-12-09 17:38:12.321752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.321783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 
00:27:46.128 [2024-12-09 17:38:12.321921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.321952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.322193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.322227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.322487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.322517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.322642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.322674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.322792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.322823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 
00:27:46.128 [2024-12-09 17:38:12.322935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.322966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.323088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.323119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.323363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.323395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.323552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.323583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.323701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.323732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 
00:27:46.128 [2024-12-09 17:38:12.323899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.323930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.324037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.324068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.324296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.324329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.324519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.324549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.324742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.324774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 
00:27:46.128 [2024-12-09 17:38:12.325001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.325032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.325200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.325233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.325419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.325450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.325551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.325582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.325823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.325853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 
00:27:46.128 [2024-12-09 17:38:12.325981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.326013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.326129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.326159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.326378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.326411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.326588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.326618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.326800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.326832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 
00:27:46.128 [2024-12-09 17:38:12.326953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.326984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.327090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.327121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.327259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.327303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.327476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.327508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.327624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.327655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 
00:27:46.128 [2024-12-09 17:38:12.327832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.327869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.328065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.328096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.328209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.328242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.328365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.328395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.328591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.328623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 
00:27:46.128 [2024-12-09 17:38:12.328807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.328838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.329012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.329044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.329310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.128 [2024-12-09 17:38:12.329342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.128 qpair failed and we were unable to recover it. 00:27:46.128 [2024-12-09 17:38:12.329457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.329488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.329692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.329724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 
00:27:46.129 [2024-12-09 17:38:12.329857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.329887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.330009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.330041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.330237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.330270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.330382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.330413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.330532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.330563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 
00:27:46.129 [2024-12-09 17:38:12.330666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.330697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.330932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.330963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.331095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.331126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.331372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.331405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.331510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.331541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 
00:27:46.129 [2024-12-09 17:38:12.331715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.331745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.331863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.331894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.331997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.332028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.332208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.332241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.332440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.332471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 
00:27:46.129 [2024-12-09 17:38:12.332590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.332621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.332806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.332837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.333011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.333043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.333216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.333249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.333425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.333456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 
00:27:46.129 [2024-12-09 17:38:12.333573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.333603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.333789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.333821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.333944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.333975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.334237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.334270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.334391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.334421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 
00:27:46.129 [2024-12-09 17:38:12.334534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.334565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.334733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.334764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.334951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.334987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.335118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.335149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.335333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.335365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 
00:27:46.129 [2024-12-09 17:38:12.335548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.335578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.335689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.335720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.335840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.335871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.336055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.336086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.336187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.336220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 
00:27:46.129 [2024-12-09 17:38:12.336395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.336425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.336650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-12-09 17:38:12.336682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-12-09 17:38:12.336922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.336952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.337189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.337221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.337333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.337364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.130 [2024-12-09 17:38:12.337477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.337509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.337639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.337670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.337913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.337943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.338078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.338109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.338287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.338320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.130 [2024-12-09 17:38:12.338490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.338520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.338647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.338678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.338803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.338834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.338962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.338993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.339107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.339140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.130 [2024-12-09 17:38:12.339328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.339359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.339487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.339518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.339687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.339719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.339889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.339920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.340045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.340077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.130 [2024-12-09 17:38:12.340218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.340252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.340422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.340455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.340625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.340656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.340927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.340958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.341062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.341094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.130 [2024-12-09 17:38:12.341213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.341246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.341378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.341409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.341524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.341556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.341662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.341693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.341862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.341894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.130 [2024-12-09 17:38:12.342021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.342052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.342163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.342203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.342306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.342343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.342511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.342542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-12-09 17:38:12.342645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-12-09 17:38:12.342676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.132 [2024-12-09 17:38:12.353853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.132 [2024-12-09 17:38:12.353925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.132 qpair failed and we were unable to recover it.
00:27:46.133 [2024-12-09 17:38:12.362363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.133 [2024-12-09 17:38:12.362395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.133 qpair failed and we were unable to recover it. 00:27:46.133 [2024-12-09 17:38:12.362601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.133 [2024-12-09 17:38:12.362633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.133 qpair failed and we were unable to recover it. 00:27:46.133 [2024-12-09 17:38:12.362820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.133 [2024-12-09 17:38:12.362851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.133 qpair failed and we were unable to recover it. 00:27:46.133 [2024-12-09 17:38:12.363053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.133 [2024-12-09 17:38:12.363085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.133 qpair failed and we were unable to recover it. 00:27:46.133 [2024-12-09 17:38:12.363206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.133 [2024-12-09 17:38:12.363238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.133 qpair failed and we were unable to recover it. 
00:27:46.133 [2024-12-09 17:38:12.363421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.133 [2024-12-09 17:38:12.363454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.133 qpair failed and we were unable to recover it. 00:27:46.133 [2024-12-09 17:38:12.363640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.133 [2024-12-09 17:38:12.363672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.133 qpair failed and we were unable to recover it. 00:27:46.133 [2024-12-09 17:38:12.363808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.363845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.363952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.363983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.364239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.364272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 
00:27:46.134 [2024-12-09 17:38:12.364399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.364431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.364669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.364699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.364819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.364850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.365030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.365062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.365238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.365271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 
00:27:46.134 [2024-12-09 17:38:12.365462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.365493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.365663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.365694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.365802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.365833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.365947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.365978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.366177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.366210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 
00:27:46.134 [2024-12-09 17:38:12.366342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.366374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.366487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.366520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.366654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.366684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.366873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.366904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.367028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.367067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 
00:27:46.134 [2024-12-09 17:38:12.367190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.367223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.367425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.367456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.367713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.367744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.367857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.367888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.368059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.368091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 
00:27:46.134 [2024-12-09 17:38:12.368218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.368251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.368368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.368399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.368507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.368538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.368727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.368762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.368968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.369001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 
00:27:46.134 [2024-12-09 17:38:12.369116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.369148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.369346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.369378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.369493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.369525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.369641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.369673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.369847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.369878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 
00:27:46.134 [2024-12-09 17:38:12.370063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.370095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.370332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.370365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.370517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.370550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.370672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.370704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-12-09 17:38:12.370829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.370860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 
00:27:46.134 [2024-12-09 17:38:12.370985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-12-09 17:38:12.371017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.371148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.371192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.371452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.371492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.371733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.371764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.371952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.371984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-12-09 17:38:12.372108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.372140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.372262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.372294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.372408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.372440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.372606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.372637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.372755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.372787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-12-09 17:38:12.372900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.372931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.373178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.373211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.373400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.373431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.373553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.373585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.373841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.373872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-12-09 17:38:12.373972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.374004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.374201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.374235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.374416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.374448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.374578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.374610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.374785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.374816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-12-09 17:38:12.374945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.374976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.375102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.375133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.375245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.375278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.375383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.375414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.375527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.375558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-12-09 17:38:12.375670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.375703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.375826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.375858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.376094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.376126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.376328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.376361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-12-09 17:38:12.376564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-12-09 17:38:12.376596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-12-09 17:38:12.376704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.135 [2024-12-09 17:38:12.376736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.135 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeated for tqpair=0x7f30f0000b90, timestamps 17:38:12.376853 through 17:38:12.383304 ...]
00:27:46.136 [2024-12-09 17:38:12.383512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.136 [2024-12-09 17:38:12.383585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.136 qpair failed and we were unable to recover it.
[... same triplet repeated for tqpair=0x1f261a0, timestamps 17:38:12.383804 through 17:38:12.398275 ...]
00:27:46.138 [2024-12-09 17:38:12.398383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-12-09 17:38:12.398414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-12-09 17:38:12.398588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-12-09 17:38:12.398619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-12-09 17:38:12.398808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-12-09 17:38:12.398839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-12-09 17:38:12.399103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-12-09 17:38:12.399136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-12-09 17:38:12.399246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-12-09 17:38:12.399279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 
00:27:46.138 [2024-12-09 17:38:12.399446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-12-09 17:38:12.399477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-12-09 17:38:12.399666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-12-09 17:38:12.399698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-12-09 17:38:12.399965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-12-09 17:38:12.399996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-12-09 17:38:12.400182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.400214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.400391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.400423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 
00:27:46.139 [2024-12-09 17:38:12.400599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.400630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.400742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.400774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.401028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.401060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.401240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.401274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.401396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.401428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 
00:27:46.139 [2024-12-09 17:38:12.401552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.401584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.401708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.401740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.401918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.401949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.402125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.402157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.402523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.402559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 
00:27:46.139 [2024-12-09 17:38:12.402744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.402776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.402881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.402913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.403035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.403067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.403189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.403222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.403418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.403450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 
00:27:46.139 [2024-12-09 17:38:12.403632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.403663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.403842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.403874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.403997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.404029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.404203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.404236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.404435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.404468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 
00:27:46.139 [2024-12-09 17:38:12.404729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.404761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.404890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.404921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.405190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.405222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.405332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.405365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.405477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.405508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 
00:27:46.139 [2024-12-09 17:38:12.405799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.405830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.405943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.405975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.406148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.406187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.406456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.406489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.406604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.406635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 
00:27:46.139 [2024-12-09 17:38:12.406738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.406770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.406875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.406907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.407075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.407106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.407350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.407385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.407562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.407593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 
00:27:46.139 [2024-12-09 17:38:12.407726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.407757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.407887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.407925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-12-09 17:38:12.408094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-12-09 17:38:12.408128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.408323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.408356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.408471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.408504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 
00:27:46.140 [2024-12-09 17:38:12.408631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.408662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.408777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.408809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.409046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.409077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.409251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.409286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.409396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.409427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 
00:27:46.140 [2024-12-09 17:38:12.409562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.409592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.409877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.409909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.410189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.410222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.410348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.410379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.410554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.410586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 
00:27:46.140 [2024-12-09 17:38:12.410721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.410753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.411039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.411071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.411179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.411211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.411383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.411415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.411547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.411578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 
00:27:46.140 [2024-12-09 17:38:12.411687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.411719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.411830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.411861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.411968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.411999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.412201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.412234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.412414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.412446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 
00:27:46.140 [2024-12-09 17:38:12.412627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.412657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.412830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.412861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.412990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.413021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.413122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.413160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 00:27:46.140 [2024-12-09 17:38:12.413391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.140 [2024-12-09 17:38:12.413425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.140 qpair failed and we were unable to recover it. 
00:27:46.140 [2024-12-09 17:38:12.413602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-12-09 17:38:12.413633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated for each retry, timestamps 17:38:12.413825 through 17:38:12.436671 ...]
00:27:46.143 [2024-12-09 17:38:12.436862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.143 [2024-12-09 17:38:12.436894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.143 qpair failed and we were unable to recover it.
00:27:46.143 [2024-12-09 17:38:12.437002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.143 [2024-12-09 17:38:12.437033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.143 qpair failed and we were unable to recover it. 00:27:46.143 [2024-12-09 17:38:12.437163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.143 [2024-12-09 17:38:12.437202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.143 qpair failed and we were unable to recover it. 00:27:46.143 [2024-12-09 17:38:12.437390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.143 [2024-12-09 17:38:12.437422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.143 qpair failed and we were unable to recover it. 00:27:46.143 [2024-12-09 17:38:12.437606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.143 [2024-12-09 17:38:12.437637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.143 qpair failed and we were unable to recover it. 00:27:46.143 [2024-12-09 17:38:12.437769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.143 [2024-12-09 17:38:12.437801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.143 qpair failed and we were unable to recover it. 
00:27:46.143 [2024-12-09 17:38:12.437986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.143 [2024-12-09 17:38:12.438025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.143 qpair failed and we were unable to recover it. 00:27:46.143 [2024-12-09 17:38:12.438140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.143 [2024-12-09 17:38:12.438179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.143 qpair failed and we were unable to recover it. 00:27:46.143 [2024-12-09 17:38:12.438356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.143 [2024-12-09 17:38:12.438391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.143 qpair failed and we were unable to recover it. 00:27:46.143 [2024-12-09 17:38:12.438634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.143 [2024-12-09 17:38:12.438666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.143 qpair failed and we were unable to recover it. 00:27:46.143 [2024-12-09 17:38:12.438871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.143 [2024-12-09 17:38:12.438903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.143 qpair failed and we were unable to recover it. 
00:27:46.143 [2024-12-09 17:38:12.439022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.143 [2024-12-09 17:38:12.439053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.143 qpair failed and we were unable to recover it. 00:27:46.143 [2024-12-09 17:38:12.439223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.143 [2024-12-09 17:38:12.439256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.143 qpair failed and we were unable to recover it. 00:27:46.143 [2024-12-09 17:38:12.439391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.439422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.439542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.439572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.439687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.439718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 
00:27:46.144 [2024-12-09 17:38:12.439839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.439870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.439975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.440005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.440124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.440157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.440272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.440303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.440514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.440545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 
00:27:46.144 [2024-12-09 17:38:12.440670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.440703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.440837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.440868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.440979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.441009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.441243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.441277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.441386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.441417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 
00:27:46.144 [2024-12-09 17:38:12.441542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.441573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.441834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.441865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.441968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.441998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.442176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.442210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.442346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.442378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 
00:27:46.144 [2024-12-09 17:38:12.442505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.442536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.442769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.442800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.442903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.442934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.443121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.443152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.443342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.443375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 
00:27:46.144 [2024-12-09 17:38:12.443511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.443542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.443726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.443759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.443863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.443895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.444021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.444053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.444157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.444194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 
00:27:46.144 [2024-12-09 17:38:12.444316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.444347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.444512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.444544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.444762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.444793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.444984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.445016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.445190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.445224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 
00:27:46.144 [2024-12-09 17:38:12.445408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.445440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.445569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.445601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.445773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.445805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.445910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.445942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.446204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.446237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 
00:27:46.144 [2024-12-09 17:38:12.446355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.446388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.144 [2024-12-09 17:38:12.446574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.144 [2024-12-09 17:38:12.446604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.144 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.446775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.446807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.447055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.447087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.447206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.447237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 
00:27:46.145 [2024-12-09 17:38:12.447410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.447442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.447645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.447677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.447799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.447830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.448003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.448033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.448206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.448238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 
00:27:46.145 [2024-12-09 17:38:12.448425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.448457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.448559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.448589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.448757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.448789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.449032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.449063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.449196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.449234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 
00:27:46.145 [2024-12-09 17:38:12.449337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.449367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.449537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.449569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.449685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.449715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.449882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.449914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.450106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.450138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 
00:27:46.145 [2024-12-09 17:38:12.450321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.450355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.450492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.450523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.450663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.450694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.450828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.450865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 00:27:46.145 [2024-12-09 17:38:12.451033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.145 [2024-12-09 17:38:12.451064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.145 qpair failed and we were unable to recover it. 
00:27:46.145 [2024-12-09 17:38:12.451233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.145 [2024-12-09 17:38:12.451266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.145 qpair failed and we were unable to recover it.
00:27:46.145 [last error pair repeated for each reconnect attempt on tqpair=0x1f261a0, from 2024-12-09 17:38:12.451448 through 17:38:12.469139]
00:27:46.148 [2024-12-09 17:38:12.469361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.148 [2024-12-09 17:38:12.469432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.148 qpair failed and we were unable to recover it.
00:27:46.148 [last error pair repeated for each reconnect attempt on tqpair=0x7f30f0000b90, from 2024-12-09 17:38:12.469668 through 17:38:12.473878]
00:27:46.148 [2024-12-09 17:38:12.474059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.474092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 00:27:46.148 [2024-12-09 17:38:12.474214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.474247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 00:27:46.148 [2024-12-09 17:38:12.474432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.474464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 00:27:46.148 [2024-12-09 17:38:12.474640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.474671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 00:27:46.148 [2024-12-09 17:38:12.474854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.474886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 
00:27:46.148 [2024-12-09 17:38:12.475130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.475163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 00:27:46.148 [2024-12-09 17:38:12.475283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.475314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 00:27:46.148 [2024-12-09 17:38:12.475486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.475518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 00:27:46.148 [2024-12-09 17:38:12.475691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.475723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 00:27:46.148 [2024-12-09 17:38:12.475837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.475868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 
00:27:46.148 [2024-12-09 17:38:12.476075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.476107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 00:27:46.148 [2024-12-09 17:38:12.476353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.476386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 00:27:46.148 [2024-12-09 17:38:12.476558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.476589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 00:27:46.148 [2024-12-09 17:38:12.476820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.476852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 00:27:46.148 [2024-12-09 17:38:12.476955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.148 [2024-12-09 17:38:12.476987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.148 qpair failed and we were unable to recover it. 
00:27:46.148 [2024-12-09 17:38:12.477155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.477198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.477433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.477464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.477664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.477696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.477864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.477901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.478070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.478102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 
00:27:46.149 [2024-12-09 17:38:12.478217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.478250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.478431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.478462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.478575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.478607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.478784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.478815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.478930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.478962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 
00:27:46.149 [2024-12-09 17:38:12.479221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.479254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.479443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.479474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.479644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.479675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.479880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.479912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.480092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.480124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 
00:27:46.149 [2024-12-09 17:38:12.480303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.480335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.480438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.480469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.480674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.480707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.480994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.481025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.481192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.481225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 
00:27:46.149 [2024-12-09 17:38:12.481392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.481423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.481543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.481574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.481758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.481790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.482049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.482080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.482211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.482244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 
00:27:46.149 [2024-12-09 17:38:12.482364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.482395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.482523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.482554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.482798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.482829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.483030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.483061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.483184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.483215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 
00:27:46.149 [2024-12-09 17:38:12.483400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.483432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.483617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.483648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.483760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.483791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.483904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.483935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 00:27:46.149 [2024-12-09 17:38:12.484184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.484217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.149 qpair failed and we were unable to recover it. 
00:27:46.149 [2024-12-09 17:38:12.484392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.149 [2024-12-09 17:38:12.484424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.484551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.484582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.484815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.484846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.484963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.484994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.485185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.485218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 
00:27:46.150 [2024-12-09 17:38:12.485338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.485371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.485492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.485524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.485644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.485675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.485909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.485946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.486067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.486099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 
00:27:46.150 [2024-12-09 17:38:12.486300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.486333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.486452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.486484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.486665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.486698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.486937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.486968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.487070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.487102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 
00:27:46.150 [2024-12-09 17:38:12.487274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.487306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.487431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.487463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.487579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.487610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.487894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.487925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.488043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.488075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 
00:27:46.150 [2024-12-09 17:38:12.488243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.488276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.488376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.488407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.488600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.488632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.488817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.488849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.489031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.489062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 
00:27:46.150 [2024-12-09 17:38:12.489191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.489224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.489504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.489535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.489715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.489747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.489867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.489898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 00:27:46.150 [2024-12-09 17:38:12.490137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.150 [2024-12-09 17:38:12.490192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.150 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." message cycles for tqpair=0x7f30f0000b90 (addr=10.0.0.2, port=4420) repeated through 00:27:46.153; omitted]
00:27:46.153 [2024-12-09 17:38:12.516814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.516846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.517088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.517119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.517371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.517405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.517646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.517676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.517935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.517966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 
00:27:46.153 [2024-12-09 17:38:12.518225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.518258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.518547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.518578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.518847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.518878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.519087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.519119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.519344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.519377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 
00:27:46.153 [2024-12-09 17:38:12.519654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.519685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.519899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.519931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.520115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.520146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.520410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.520443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.520648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.520680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 
00:27:46.153 [2024-12-09 17:38:12.520935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.520967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.521145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.153 [2024-12-09 17:38:12.521203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.153 qpair failed and we were unable to recover it. 00:27:46.153 [2024-12-09 17:38:12.521379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.521410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.521595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.521627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.521826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.521857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 
00:27:46.154 [2024-12-09 17:38:12.522051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.522083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.522278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.522312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.522546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.522577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.522839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.522870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.523064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.523096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 
00:27:46.154 [2024-12-09 17:38:12.523354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.523386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.523577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.523608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.523868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.523899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.524184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.524221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.524491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.524523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 
00:27:46.154 [2024-12-09 17:38:12.524801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.524833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.525092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.525123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.525421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.525453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.525589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.525620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.525794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.525826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 
00:27:46.154 [2024-12-09 17:38:12.526098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.526129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.526315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.526347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.526556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.526588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.526848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.526880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.527068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.527099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 
00:27:46.154 [2024-12-09 17:38:12.527277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.527311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.527547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.527579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.527870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.527902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.528088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.528118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.528315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.528349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 
00:27:46.154 [2024-12-09 17:38:12.528585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.528616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.528818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.528849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.529021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.529052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.529292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.529325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.529513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.529544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 
00:27:46.154 [2024-12-09 17:38:12.529731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.529763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.154 [2024-12-09 17:38:12.529996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.154 [2024-12-09 17:38:12.530027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.154 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.530140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.530178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.530475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.530507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.530751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.530782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 
00:27:46.155 [2024-12-09 17:38:12.530959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.530990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.531119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.531149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.531419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.531452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.531756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.531788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.531995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.532026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 
00:27:46.155 [2024-12-09 17:38:12.532217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.532250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.532511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.532542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.532824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.532855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.533133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.533164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.533483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.533514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 
00:27:46.155 [2024-12-09 17:38:12.533757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.533788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.533913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.533944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.534133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.534164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.534350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.534389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.534705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.534737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 
00:27:46.155 [2024-12-09 17:38:12.535015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.535047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.535314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.535346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.535527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.535559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.535746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.535778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.536017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.536048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 
00:27:46.155 [2024-12-09 17:38:12.536285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.536318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.536558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.536589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.536851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.536883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.537070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.537100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 00:27:46.155 [2024-12-09 17:38:12.537363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.155 [2024-12-09 17:38:12.537396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.155 qpair failed and we were unable to recover it. 
00:27:46.158 [2024-12-09 17:38:12.566018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.158 [2024-12-09 17:38:12.566048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.158 qpair failed and we were unable to recover it. 00:27:46.158 [2024-12-09 17:38:12.566294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.158 [2024-12-09 17:38:12.566349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.158 qpair failed and we were unable to recover it. 00:27:46.158 [2024-12-09 17:38:12.566644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.158 [2024-12-09 17:38:12.566675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.158 qpair failed and we were unable to recover it. 00:27:46.158 [2024-12-09 17:38:12.566926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.158 [2024-12-09 17:38:12.566957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.158 qpair failed and we were unable to recover it. 00:27:46.158 [2024-12-09 17:38:12.567217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.158 [2024-12-09 17:38:12.567250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.158 qpair failed and we were unable to recover it. 
00:27:46.158 [2024-12-09 17:38:12.567496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.158 [2024-12-09 17:38:12.567526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.158 qpair failed and we were unable to recover it. 00:27:46.158 [2024-12-09 17:38:12.567791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.158 [2024-12-09 17:38:12.567822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.158 qpair failed and we were unable to recover it. 00:27:46.158 [2024-12-09 17:38:12.568059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.158 [2024-12-09 17:38:12.568091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.158 qpair failed and we were unable to recover it. 00:27:46.158 [2024-12-09 17:38:12.568285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.158 [2024-12-09 17:38:12.568317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.158 qpair failed and we were unable to recover it. 00:27:46.158 [2024-12-09 17:38:12.568449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.158 [2024-12-09 17:38:12.568480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.158 qpair failed and we were unable to recover it. 
00:27:46.158 [2024-12-09 17:38:12.568674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.568705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.568889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.568924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.569105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.569137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.569410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.569443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.569640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.569672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 
00:27:46.159 [2024-12-09 17:38:12.569930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.569962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.570253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.570286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.570471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.570502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.570764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.570796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.570968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.571000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 
00:27:46.159 [2024-12-09 17:38:12.571189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.571222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.571431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.571463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.571646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.571677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.571852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.571884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.572070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.572102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 
00:27:46.159 [2024-12-09 17:38:12.572231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.572265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.572530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.572562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.572755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.572787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.573042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.573073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.573269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.573302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 
00:27:46.159 [2024-12-09 17:38:12.573571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.573603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.573793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.573824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.574005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.574036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.574242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.574276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.574462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.574493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 
00:27:46.159 [2024-12-09 17:38:12.574757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.574788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.575073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.575103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.575289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.575321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.575570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.575607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.575914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.575945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 
00:27:46.159 [2024-12-09 17:38:12.576195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.576228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.576499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.576530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.576819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.576849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.577133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.577164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.577446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.577478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 
00:27:46.159 [2024-12-09 17:38:12.577675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.577706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.577878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.577909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.578155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.578199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.578490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.578521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.159 qpair failed and we were unable to recover it. 00:27:46.159 [2024-12-09 17:38:12.578785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.159 [2024-12-09 17:38:12.578816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 
00:27:46.160 [2024-12-09 17:38:12.579106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.579138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.579417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.579450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.579722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.579753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.580042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.580073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.580345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.580378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 
00:27:46.160 [2024-12-09 17:38:12.580622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.580655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.580854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.580884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.581007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.581038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.581176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.581209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.581493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.581524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 
00:27:46.160 [2024-12-09 17:38:12.581811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.581842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.582146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.582185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.582467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.582498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.582767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.582798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.583040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.583071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 
00:27:46.160 [2024-12-09 17:38:12.583255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.583289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.583553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.583584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.583718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.583750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.583944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.583975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.584237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.584270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 
00:27:46.160 [2024-12-09 17:38:12.584467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.584498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.584756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.584787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.585052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.585084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.585372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.585405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.585675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.585706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 
00:27:46.160 [2024-12-09 17:38:12.585898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.585929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.586183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.586215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.586420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.586451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.586719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.586756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 00:27:46.160 [2024-12-09 17:38:12.586898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.160 [2024-12-09 17:38:12.586929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.160 qpair failed and we were unable to recover it. 
00:27:46.163 [2024-12-09 17:38:12.616153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.163 [2024-12-09 17:38:12.616195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.163 qpair failed and we were unable to recover it. 00:27:46.163 [2024-12-09 17:38:12.616493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.163 [2024-12-09 17:38:12.616524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.163 qpair failed and we were unable to recover it. 00:27:46.163 [2024-12-09 17:38:12.616747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.163 [2024-12-09 17:38:12.616778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.163 qpair failed and we were unable to recover it. 00:27:46.163 [2024-12-09 17:38:12.617061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.163 [2024-12-09 17:38:12.617092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.163 qpair failed and we were unable to recover it. 00:27:46.163 [2024-12-09 17:38:12.617317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.163 [2024-12-09 17:38:12.617350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.163 qpair failed and we were unable to recover it. 
00:27:46.163 [2024-12-09 17:38:12.617542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.163 [2024-12-09 17:38:12.617572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.163 qpair failed and we were unable to recover it. 00:27:46.163 [2024-12-09 17:38:12.617839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.163 [2024-12-09 17:38:12.617870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.163 qpair failed and we were unable to recover it. 00:27:46.163 [2024-12-09 17:38:12.618062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.163 [2024-12-09 17:38:12.618094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.163 qpair failed and we were unable to recover it. 00:27:46.163 [2024-12-09 17:38:12.618360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.163 [2024-12-09 17:38:12.618393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.163 qpair failed and we were unable to recover it. 00:27:46.163 [2024-12-09 17:38:12.618673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.163 [2024-12-09 17:38:12.618705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.163 qpair failed and we were unable to recover it. 
00:27:46.163 [2024-12-09 17:38:12.618992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.163 [2024-12-09 17:38:12.619023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.163 qpair failed and we were unable to recover it. 00:27:46.163 [2024-12-09 17:38:12.619234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.619267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.619447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.619478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.619676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.619708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.619887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.619918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 
00:27:46.164 [2024-12-09 17:38:12.620174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.620207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.620479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.620510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.620780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.620811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.621031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.621062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.621357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.621390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 
00:27:46.164 [2024-12-09 17:38:12.621686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.621717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.621989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.622020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.622165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.622210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.622483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.622514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.622711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.622743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 
00:27:46.164 [2024-12-09 17:38:12.622997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.623028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.623222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.623255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.623548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.623579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.623887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.623918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.624095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.624127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 
00:27:46.164 [2024-12-09 17:38:12.624409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.624442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.624641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.624672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.624889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.624920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.625198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.625232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.625376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.625408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 
00:27:46.164 [2024-12-09 17:38:12.625681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.625718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.625946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.625978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.626254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.626286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.626486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.626517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.626697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.626728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 
00:27:46.164 [2024-12-09 17:38:12.627016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.627047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.627308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.627341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.627560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.627592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.627786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.627817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.628013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.628044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 
00:27:46.164 [2024-12-09 17:38:12.628237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.628269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.628493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.628524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.628726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.628757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.628942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.628973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 00:27:46.164 [2024-12-09 17:38:12.629202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.164 [2024-12-09 17:38:12.629235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.164 qpair failed and we were unable to recover it. 
00:27:46.164 [2024-12-09 17:38:12.629448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.629481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.629689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.629721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.629971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.630002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.630253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.630287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.630541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.630573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 
00:27:46.165 [2024-12-09 17:38:12.630768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.630799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.631050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.631081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.631331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.631365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.631674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.631708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.631987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.632018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 
00:27:46.165 [2024-12-09 17:38:12.632216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.632250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.632450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.632483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.632741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.632773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.633051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.633083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.633353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.633387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 
00:27:46.165 [2024-12-09 17:38:12.633683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.633714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.633981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.634012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.634309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.634342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.634613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.634644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.634940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.634971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 
00:27:46.165 [2024-12-09 17:38:12.635186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.635219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.635519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.635550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.635830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.635860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.636136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.636178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 00:27:46.165 [2024-12-09 17:38:12.636432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.165 [2024-12-09 17:38:12.636464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.165 qpair failed and we were unable to recover it. 
00:27:46.165 [2024-12-09 17:38:12.636595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.636632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.636829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.636861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.637134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.637165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.637410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.637443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.637660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.637692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.637891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.637922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.638185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.638217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.638522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.638554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.638812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.638843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.639033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.639064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.639320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.639354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.639561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.639593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.639728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.639761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.640012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.165 [2024-12-09 17:38:12.640044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.165 qpair failed and we were unable to recover it.
00:27:46.165 [2024-12-09 17:38:12.640303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.640336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.640517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.640548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.640821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.640853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.641051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.641082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.641285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.641319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.641514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.641544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.641730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.641762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.642015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.642046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.642188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.642221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.642426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.642458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.642731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.642762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.643050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.643084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.643362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.643398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.643683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.643718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.643993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.644025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.644299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.644334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.644475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.644507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.644776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.644808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.644987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.645021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.645294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.645328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.645546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.645578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.645853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.645887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.646090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.646121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.166 [2024-12-09 17:38:12.646328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.166 [2024-12-09 17:38:12.646361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.166 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.646642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.646674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.646929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.646960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.647163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.647219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.647479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.647513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.647787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.647818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.648087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.648118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.648418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.648455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.648651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.648683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.648887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.648918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.649131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.649163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.649368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.649399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.649648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.649679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.649867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.649899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.650151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.650198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.650473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.650504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.650785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.650817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.651020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.651052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.651326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.651360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.651502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.651533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.651783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.651816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.652026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.652058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.652385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.652417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.652619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.652650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.652868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.652901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.653101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.653132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.653418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.653451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.653658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.653689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.653920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.653951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.654179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.654213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.654498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.654532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.654719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.654750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.655015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.655048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.655272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.655308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.655581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.655612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.655898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.655930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.656207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.443 [2024-12-09 17:38:12.656240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.443 qpair failed and we were unable to recover it.
00:27:46.443 [2024-12-09 17:38:12.656353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.656383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.656565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.656596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.656774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.656806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.657016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.657047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.657294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.657328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.657521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.657553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.657782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.657821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.658096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.658127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.658329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.658363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.658502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.658533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.658849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.658881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.659133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.659177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.659390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.659424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.659626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.659658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.659962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.659994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.660259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.660294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.660506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.660537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.660767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.660800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.660940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.660971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.661266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.661301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.661594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.661626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.661901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.661934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.662217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.662250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.662548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.662579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.662780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.662812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.663081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.663113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.663403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.663436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.663713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.663745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.664006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.664036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.664293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.664327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.664621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.664653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.664795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.664826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.665005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.665037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.665248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.665282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.665552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.665584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.665865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.665897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.666157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.666202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.666468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.666500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.666681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.444 [2024-12-09 17:38:12.666713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.444 qpair failed and we were unable to recover it.
00:27:46.444 [2024-12-09 17:38:12.666923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.445 [2024-12-09 17:38:12.666955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.445 qpair failed and we were unable to recover it.
00:27:46.445 [2024-12-09 17:38:12.667160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.445 [2024-12-09 17:38:12.667201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.445 qpair failed and we were unable to recover it.
00:27:46.445 [2024-12-09 17:38:12.667476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.667507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.667708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.667740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.667992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.668024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.668313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.668347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.668527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.668560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 
00:27:46.445 [2024-12-09 17:38:12.668816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.668854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.669000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.669037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.669309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.669343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.669621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.669654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.669874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.669906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 
00:27:46.445 [2024-12-09 17:38:12.670083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.670115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.670319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.670353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.670490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.670521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.670792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.670825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.671050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.671082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 
00:27:46.445 [2024-12-09 17:38:12.671284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.671319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.671523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.671555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.671813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.671846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.672035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.672067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.672353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.672387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 
00:27:46.445 [2024-12-09 17:38:12.672647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.672680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.672884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.672916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.673189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.673223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.673423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.673456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.673585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.673617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 
00:27:46.445 [2024-12-09 17:38:12.673811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.673843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.674058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.674090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.674358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.674392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.674692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.674724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.674989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.675021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 
00:27:46.445 [2024-12-09 17:38:12.675242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.675276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.675480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.675512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.675789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.675823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.676041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.676073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.676337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.676372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 
00:27:46.445 [2024-12-09 17:38:12.676670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.676702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.676909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.676941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 00:27:46.445 [2024-12-09 17:38:12.677123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-12-09 17:38:12.677155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.677369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.677403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.677677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.677709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 
00:27:46.446 [2024-12-09 17:38:12.677966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.677998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.678200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.678233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.678501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.678534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.678668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.678699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.678974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.679005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 
00:27:46.446 [2024-12-09 17:38:12.679186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.679225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.679408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.679440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.679690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.679721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.679906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.679938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.680147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.680190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 
00:27:46.446 [2024-12-09 17:38:12.680388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.680420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.680668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.680700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.680819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.680851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.681123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.681155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.681374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.681409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 
00:27:46.446 [2024-12-09 17:38:12.681535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.681567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.681778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.681810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.682061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.682093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.682294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.682328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.682608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.682641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 
00:27:46.446 [2024-12-09 17:38:12.682842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.682873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.683127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.683159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.683439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.683471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.683601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.683633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.683826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.683858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 
00:27:46.446 [2024-12-09 17:38:12.684156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.684196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.684410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.684442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.684703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.684737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.684981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.685015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.685291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.685325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 
00:27:46.446 [2024-12-09 17:38:12.685551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.685583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.685765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.685796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.686003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.686036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.686220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.686253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 00:27:46.446 [2024-12-09 17:38:12.686572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.686603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it. 
00:27:46.446 [2024-12-09 17:38:12.686853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.446 [2024-12-09 17:38:12.686885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.446 qpair failed and we were unable to recover it.
[... same connect() errno = 111 / qpair recovery failure for tqpair=0x7f30f0000b90 repeated, duplicate log lines trimmed ...]
00:27:46.449 [2024-12-09 17:38:12.707793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-12-09 17:38:12.707870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it.
[... same connect() errno = 111 / qpair recovery failure for tqpair=0x1f261a0 repeated, duplicate log lines trimmed ...]
00:27:46.450 [2024-12-09 17:38:12.717497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.717529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.717804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.717839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.718110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.718144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.718371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.718404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.718629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.718661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-12-09 17:38:12.718908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.718939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.719138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.719175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.719463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.719497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.719771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.719803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.720086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.720118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-12-09 17:38:12.720405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.720439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.720644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.720676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.720941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.720973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.721107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.721139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.721375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.721409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-12-09 17:38:12.721691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.721724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.721922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.721960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.722252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.722286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.722396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.722426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.722704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.722737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-12-09 17:38:12.722953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.722985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.723183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.723219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.723412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.723444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.723720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.723754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.723874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.723908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-12-09 17:38:12.724189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.724223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.724444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.724477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.724657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.724690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.724939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.724971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.725158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.725210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-12-09 17:38:12.725346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.725379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.725688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.725720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.725994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.726027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.726223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.726258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.726518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.726549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-12-09 17:38:12.726762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.726794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.726988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.727020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.727155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.727198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-12-09 17:38:12.727410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-12-09 17:38:12.727443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.727637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.727669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 
00:27:46.451 [2024-12-09 17:38:12.727890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.727923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.728138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.728181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.728373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.728406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.728659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.728697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.728883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.728914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 
00:27:46.451 [2024-12-09 17:38:12.729095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.729127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.729396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.729429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.729642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.729674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.729945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.729977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.730115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.730146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 
00:27:46.451 [2024-12-09 17:38:12.730365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.730398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.730627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.730658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.730792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.730824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.731019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.731051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.731248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.731282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 
00:27:46.451 [2024-12-09 17:38:12.731485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.731517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.731704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.731736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.732073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.732151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.732385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.732423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.732685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.732721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 
00:27:46.451 [2024-12-09 17:38:12.732860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.732893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.733032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.733065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.733258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.733292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.733595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.733626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.733846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.733878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 
00:27:46.451 [2024-12-09 17:38:12.734088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.734120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.734442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.734476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.734674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.734707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.734990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.735023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.735284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.735317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 
00:27:46.451 [2024-12-09 17:38:12.735456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.735498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.735722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.735753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.736003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.736034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-12-09 17:38:12.736289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-12-09 17:38:12.736322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 00:27:46.452 [2024-12-09 17:38:12.736502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-12-09 17:38:12.736534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 
00:27:46.452 [2024-12-09 17:38:12.736738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-12-09 17:38:12.736770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it.
[... the preceding three-line error record (connect() failed, errno = 111 / sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats roughly 100 more times between 17:38:12.737 and 17:38:12.764, varying only in timestamp ...]
00:27:46.454 [2024-12-09 17:38:12.764756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.454 [2024-12-09 17:38:12.764836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.454 qpair failed and we were unable to recover it.
[... the same error record, now with tqpair=0x1f261a0, repeats roughly a dozen more times between 17:38:12.765 and 17:38:12.768, varying only in timestamp ...]
00:27:46.454 [2024-12-09 17:38:12.768701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.454 [2024-12-09 17:38:12.768733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.454 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.768925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.768957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.769153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.769193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.769447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.769479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.769776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.769806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-12-09 17:38:12.770081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.770113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.770328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.770362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.770542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.770573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.770837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.770869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.771067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.771099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-12-09 17:38:12.771296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.771329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.771603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.771635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.771904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.771937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.772161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.772202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.772396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.772435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-12-09 17:38:12.772704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.772735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.773015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.773048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.773269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.773303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.773580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.773612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.773892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.773924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-12-09 17:38:12.774068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.774099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.774405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.774439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.774719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.774751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.774980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.775013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.775190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.775224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-12-09 17:38:12.775498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.775529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.775813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.775844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.776101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.776134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.776408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.776442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.776633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.776664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-12-09 17:38:12.776941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.776974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.777179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.777212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.777408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.777440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.777714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.777746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.778005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.778038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-12-09 17:38:12.778293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.778328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.778622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.778654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.778947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.778980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.779255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.779289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-12-09 17:38:12.779558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.779590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-12-09 17:38:12.779865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-12-09 17:38:12.779897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.780187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.780228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.780514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.780547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.780809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.780841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.781039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.781072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 
00:27:46.456 [2024-12-09 17:38:12.781339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.781372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.781675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.781707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.781921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.781953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.782208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.782242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.782431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.782461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 
00:27:46.456 [2024-12-09 17:38:12.782712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.782743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.782923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.782954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.783142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.783185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.783380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.783411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.783610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.783641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 
00:27:46.456 [2024-12-09 17:38:12.783831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.783862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.784115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.784147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.784344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.784376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.784624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.784656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.784835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.784866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 
00:27:46.456 [2024-12-09 17:38:12.785055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.785087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.785289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.785321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.785568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.785599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.785793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.785825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.786106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.786138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 
00:27:46.456 [2024-12-09 17:38:12.786426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.786458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.786732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.786764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.786986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.787017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.787297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.787337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.787528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.787559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 
00:27:46.456 [2024-12-09 17:38:12.787831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.787862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.788155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.788197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.788450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.788482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.788687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.788718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-12-09 17:38:12.788993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.789025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 
00:27:46.456 [2024-12-09 17:38:12.789245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-12-09 17:38:12.789279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it.
[identical error triplets — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeat continuously from 17:38:12.789432 through 17:38:12.821082; repeats elided]
00:27:46.459 [2024-12-09 17:38:12.821380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.459 [2024-12-09 17:38:12.821413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.459 qpair failed and we were unable to recover it. 00:27:46.459 [2024-12-09 17:38:12.821684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.459 [2024-12-09 17:38:12.821717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.459 qpair failed and we were unable to recover it. 00:27:46.459 [2024-12-09 17:38:12.821899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.459 [2024-12-09 17:38:12.821930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.459 qpair failed and we were unable to recover it. 00:27:46.459 [2024-12-09 17:38:12.822132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.459 [2024-12-09 17:38:12.822163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.459 qpair failed and we were unable to recover it. 00:27:46.459 [2024-12-09 17:38:12.822457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.459 [2024-12-09 17:38:12.822490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.459 qpair failed and we were unable to recover it. 
00:27:46.460 [2024-12-09 17:38:12.822765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.822796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.822993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.823024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.823201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.823235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.823485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.823516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.823693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.823724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 
00:27:46.460 [2024-12-09 17:38:12.823998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.824030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.824312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.824345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.824533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.824564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.824746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.824777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.824963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.824996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 
00:27:46.460 [2024-12-09 17:38:12.825210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.825243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.825445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.825476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.825666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.825697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.825950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.825982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.826230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.826262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 
00:27:46.460 [2024-12-09 17:38:12.826514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.826546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.826814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.826846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.827144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.827199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.827388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.827420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.827627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.827659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 
00:27:46.460 [2024-12-09 17:38:12.827937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.827969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.828265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.828298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.828494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.828527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.828735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.828769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.828956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.828990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 
00:27:46.460 [2024-12-09 17:38:12.829190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.829224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.829497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.829529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.829721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.829752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.830018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.830050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.830186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.830220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 
00:27:46.460 [2024-12-09 17:38:12.830442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.830475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.830666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.830697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.830978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.831010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.831302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.831337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 00:27:46.460 [2024-12-09 17:38:12.831540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.460 [2024-12-09 17:38:12.831570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.460 qpair failed and we were unable to recover it. 
00:27:46.461 [2024-12-09 17:38:12.831826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.831857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.832055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.832092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.832364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.832397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.832593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.832624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.832890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.832922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 
00:27:46.461 [2024-12-09 17:38:12.833201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.833235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.833519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.833550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.833830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.833862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.834054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.834085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.834353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.834386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 
00:27:46.461 [2024-12-09 17:38:12.834586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.834618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.834881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.834913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.835103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.835134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.835349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.835382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.835658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.835690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 
00:27:46.461 [2024-12-09 17:38:12.836001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.836033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.836214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.836247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.836452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.836484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.836761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.836792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.836983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.837015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 
00:27:46.461 [2024-12-09 17:38:12.837318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.837351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.837613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.837644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.837860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.837892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.838175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.838208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.838464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.838496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 
00:27:46.461 [2024-12-09 17:38:12.838699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.838730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.838951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.838983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.839233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.839266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.839516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.839553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.839833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.839864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 
00:27:46.461 [2024-12-09 17:38:12.840091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.840122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.840347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.840380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.840630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.840661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.840769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.840799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.841069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.841101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 
00:27:46.461 [2024-12-09 17:38:12.841302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.841335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.841525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.841556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.841734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.841765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.461 [2024-12-09 17:38:12.842034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.461 [2024-12-09 17:38:12.842065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.461 qpair failed and we were unable to recover it. 00:27:46.462 [2024-12-09 17:38:12.842270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.462 [2024-12-09 17:38:12.842303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.462 qpair failed and we were unable to recover it. 
00:27:46.464 [2024-12-09 17:38:12.872467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.464 [2024-12-09 17:38:12.872505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.464 qpair failed and we were unable to recover it. 00:27:46.464 [2024-12-09 17:38:12.872613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.464 [2024-12-09 17:38:12.872645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.464 qpair failed and we were unable to recover it. 00:27:46.464 [2024-12-09 17:38:12.872845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.464 [2024-12-09 17:38:12.872875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.464 qpair failed and we were unable to recover it. 00:27:46.464 [2024-12-09 17:38:12.873148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.464 [2024-12-09 17:38:12.873189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.464 qpair failed and we were unable to recover it. 00:27:46.464 [2024-12-09 17:38:12.873388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.464 [2024-12-09 17:38:12.873420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.464 qpair failed and we were unable to recover it. 
00:27:46.464 [2024-12-09 17:38:12.873672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.464 [2024-12-09 17:38:12.873703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.464 qpair failed and we were unable to recover it. 00:27:46.464 [2024-12-09 17:38:12.873906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.464 [2024-12-09 17:38:12.873937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.464 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.874138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.874179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.874454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.874486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.874683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.874713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 
00:27:46.465 [2024-12-09 17:38:12.874973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.875005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.875262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.875296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.875501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.875532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.875798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.875830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.876116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.876148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 
00:27:46.465 [2024-12-09 17:38:12.876346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.876378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.876574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.876604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.876799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.876831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.877030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.877062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.877341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.877374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 
00:27:46.465 [2024-12-09 17:38:12.877556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.877587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.877885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.877916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.878189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.878222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.878438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.878472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.878673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.878703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 
00:27:46.465 [2024-12-09 17:38:12.878972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.879003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.879209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.879242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.879517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.879549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.879834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.879865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.880075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.880105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 
00:27:46.465 [2024-12-09 17:38:12.880394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.880427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.880702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.880733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.880863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.880894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.881165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.881208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.881489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.881521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 
00:27:46.465 [2024-12-09 17:38:12.881817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.881848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.882042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.882074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.882299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.882333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.882606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.882637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.882886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.882917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 
00:27:46.465 [2024-12-09 17:38:12.883194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.883228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.883513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-12-09 17:38:12.883545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.465 qpair failed and we were unable to recover it. 00:27:46.465 [2024-12-09 17:38:12.883751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.883782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.883920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.883950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.884164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.884215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 
00:27:46.466 [2024-12-09 17:38:12.884418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.884449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.884752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.884784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.885047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.885079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.885334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.885367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.885672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.885703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 
00:27:46.466 [2024-12-09 17:38:12.885926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.885957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.886186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.886219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.886489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.886521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.886809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.886840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.887127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.887159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 
00:27:46.466 [2024-12-09 17:38:12.887435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.887469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.887759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.887791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.888073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.888105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.888238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.888272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.888566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.888598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 
00:27:46.466 [2024-12-09 17:38:12.888873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.888905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.889103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.889134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.889376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.889410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.889562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.889593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.889862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.889894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 
00:27:46.466 [2024-12-09 17:38:12.890042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.890073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.890373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.890407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.890657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.890688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.891001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.891040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.891314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.891347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 
00:27:46.466 [2024-12-09 17:38:12.891551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.891582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.891853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.891884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.892080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.892112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.892342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.892376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.892647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.892679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 
00:27:46.466 [2024-12-09 17:38:12.892959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.892989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.893269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.893303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.893498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.893530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.893782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.893813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.894086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.894117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 
00:27:46.466 [2024-12-09 17:38:12.894326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-12-09 17:38:12.894360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.466 qpair failed and we were unable to recover it. 00:27:46.466 [2024-12-09 17:38:12.894658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.894693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.894887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.894919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.895119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.895152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.895363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.895396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 
00:27:46.467 [2024-12-09 17:38:12.895664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.895695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.895908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.895938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.896220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.896254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.896442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.896475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.896735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.896766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 
00:27:46.467 [2024-12-09 17:38:12.897039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.897070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.897272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.897306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.897418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.897449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.897722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.897752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.897866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.897896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 
00:27:46.467 [2024-12-09 17:38:12.898094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.898134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.898419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.898453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.898656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.898689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.898987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.899018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.899272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.899306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 
00:27:46.467 [2024-12-09 17:38:12.899507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.899539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.899812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.899843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.899985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.900016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.900289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.900323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.900577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.900608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 
00:27:46.467 [2024-12-09 17:38:12.900737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.900768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.900893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.900925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.901204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.901239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.901375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.901407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.901686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.901719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 
00:27:46.467 [2024-12-09 17:38:12.901902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.901934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.902155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.902200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.902378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.902410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.902668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.902699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.902947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.902979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 
00:27:46.467 [2024-12-09 17:38:12.903105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.903137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.903428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.903460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.903740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.903772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.904031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.904063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.467 [2024-12-09 17:38:12.904325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.904358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 
00:27:46.467 [2024-12-09 17:38:12.904658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.467 [2024-12-09 17:38:12.904690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.467 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.904896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.904927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.905191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.905224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.905423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.905455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.905715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.905748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 
00:27:46.468 [2024-12-09 17:38:12.906024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.906055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.906346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.906379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.906599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.906631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.906943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.906974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.907177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.907210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 
00:27:46.468 [2024-12-09 17:38:12.907393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.907426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.907701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.907735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.908005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.908038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.908318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.908351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.908637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.908669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 
00:27:46.468 [2024-12-09 17:38:12.908938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.908970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.909272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.909308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.909571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.909603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.909830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.909861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.910065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.910096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 
00:27:46.468 [2024-12-09 17:38:12.910366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.910399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.910680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.910714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.910934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.910965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.911219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.911253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.911514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.911546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 
00:27:46.468 [2024-12-09 17:38:12.911750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.911782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.912034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.912065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.912378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.912411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.912666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.912698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.912922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.912954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 
00:27:46.468 [2024-12-09 17:38:12.913212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.913247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.913547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.913578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.913840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.913872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.913992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.914022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.914295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.914329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 
00:27:46.468 [2024-12-09 17:38:12.914529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.914559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.914853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.914882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.915079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.915107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.915383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.915414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 00:27:46.468 [2024-12-09 17:38:12.915612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.468 [2024-12-09 17:38:12.915642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.468 qpair failed and we were unable to recover it. 
00:27:46.468 [2024-12-09 17:38:12.915927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.915956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.916231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.916264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.916470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.916500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.916776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.916812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.917065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.917096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 
00:27:46.469 [2024-12-09 17:38:12.917298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.917329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.917579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.917610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.917856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.917886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.918164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.918205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.918406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.918436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 
00:27:46.469 [2024-12-09 17:38:12.918615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.918646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.918916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.918946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.919222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.919253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.919553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.919583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.919774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.919804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 
00:27:46.469 [2024-12-09 17:38:12.920077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.920106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.920321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.920353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.920604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.920636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.920888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.920918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 00:27:46.469 [2024-12-09 17:38:12.921139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.469 [2024-12-09 17:38:12.921196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.469 qpair failed and we were unable to recover it. 
00:27:46.469 [2024-12-09 17:38:12.921449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.921481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.921673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.921704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.921954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.921985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.922249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.922283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.922480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.922511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.922733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.922765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.923015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.923047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.923318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.923352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.923553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.923584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.923780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.923812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.924006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.924043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.924263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.924297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.924548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.924579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.924862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.924893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.925203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.469 [2024-12-09 17:38:12.925237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.469 qpair failed and we were unable to recover it.
00:27:46.469 [2024-12-09 17:38:12.925554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.925587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.925901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.925932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.926071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.926102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.926211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.926244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.926508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.926538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.926661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.926692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.926904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.926935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.927113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.927144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.927291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.927324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.927628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.927660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.927935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.927968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.928224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.928260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.928498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.928532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.928805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.928839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.929115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.929146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.929379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.929411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.929601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.929633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.929838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.929869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.930071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.930101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.930374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.930407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.930609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.930640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.930895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.930926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.931123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.931160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.931445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.931477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.931678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.931710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.931898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.931929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.932187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.932221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.932524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.932559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.932857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.932888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.933108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.933139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.933375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.933408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.933736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.933767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.933900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.933931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.934062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.934093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.934313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.934346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.934635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.934667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.934871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.934903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.935099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.935130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.935364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.935396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-09 17:38:12.935696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.470 [2024-12-09 17:38:12.935728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.935994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.936029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.936153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.936205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.936409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.936441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.936663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.936696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.936964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.936995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.937252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.937283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.937548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.937579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.937785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.937817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.938079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.938110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.938311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.938346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.938618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.938652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.938936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.938968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.939271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.939304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.939573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.939605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.939893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.939924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.940114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.940145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.940400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.940433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.940694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.940725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.940946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.940977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.941262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.941296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.941493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.941526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.941660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.941693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.941945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.941976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.942281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.942314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.942502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.942533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.942826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.942857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.943035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.943067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.943311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.943344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.943599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.943630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.943778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.943809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.944086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.944117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.944410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.944444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.944717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.944749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.944964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.944995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.945268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.945301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.945502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.945534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.945801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.945833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.946099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.946131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-09 17:38:12.946340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.471 [2024-12-09 17:38:12.946373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.946658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.946689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.946885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.946918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.947098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.947130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.947370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.947404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.947653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.947685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.947820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.947851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.948055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.948085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.948308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.948341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.948523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.948554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.948734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.948765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.949018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.949050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.949236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.949276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.949496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.949528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.949752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.949783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.950070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.950102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.950298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.950334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.950590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.950622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.950918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.950951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.951223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.951257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.951538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.951569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.951858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.951891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.952177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.952211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.952460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.952492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.952634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.952666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.952940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.952973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.953241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.953277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.953568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.953600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.953875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.953907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.954165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.954207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.954505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.954537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.954799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.954831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.955036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.955067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.955266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.955300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.955576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.955607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.955809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.955840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.956112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.956144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.956408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.956441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.956644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.956675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-09 17:38:12.956873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.472 [2024-12-09 17:38:12.956911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.957161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.957224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.957475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.957507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.957810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.957841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.958052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.958084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.958358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.958393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.958651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.958683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.958975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.959008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.959285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.959319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.959513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.959544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.959820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.959851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.960116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.960148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.960379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.960411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.960642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.960674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.960953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.960984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.961178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.961212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.961475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.961506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.961804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.961834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.962110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.962142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.962429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.962461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.962742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.962773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.962919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.962951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.963201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.963235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.963512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.963544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.963741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.963773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.963901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.963932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.964125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.964158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.964351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.964385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.964597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.964632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.964883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.964914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.965212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.965245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.473 [2024-12-09 17:38:12.965515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.473 [2024-12-09 17:38:12.965546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.473 qpair failed and we were unable to recover it.
00:27:46.749 [2024-12-09 17:38:12.965725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-12-09 17:38:12.965757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-12-09 17:38:12.965954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-12-09 17:38:12.965986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-12-09 17:38:12.966181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-12-09 17:38:12.966216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-12-09 17:38:12.966488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-12-09 17:38:12.966521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-12-09 17:38:12.966723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-12-09 17:38:12.966757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-12-09 17:38:12.966935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-12-09 17:38:12.966968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-12-09 17:38:12.967148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-12-09 17:38:12.967195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-12-09 17:38:12.967454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-12-09 17:38:12.967487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-12-09 17:38:12.967666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-12-09 17:38:12.967697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-12-09 17:38:12.968003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-12-09 17:38:12.968037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-12-09 17:38:12.968309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-12-09 17:38:12.968343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-12-09 17:38:12.968597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-12-09 17:38:12.968628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.968811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.968845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.969046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.969079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.969357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.969393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.969619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.969651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.969899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.969930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.970119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.970151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.970412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.970445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.970642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.970673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.970882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.970913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.971111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.971143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.971452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.971485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.971681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.971713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.971966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.971997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.972202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-12-09 17:38:12.972236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-12-09 17:38:12.972416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.972447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.972592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.972624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.972815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.972846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.972973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.973004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.973204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.973238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 
00:27:46.750 [2024-12-09 17:38:12.973441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.973472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.973588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.973619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.973933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.973965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.974141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.974182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.974398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.974429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 
00:27:46.750 [2024-12-09 17:38:12.974637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.974675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.974954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.974985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.975165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.975208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.975402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.975434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.975622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.975652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 
00:27:46.750 [2024-12-09 17:38:12.975777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.975808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.975937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.975968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.976179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.976211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.976484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.976515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.976788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.976819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 
00:27:46.750 [2024-12-09 17:38:12.977021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.977052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.977303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.977337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.977527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.977558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.977756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.977787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-12-09 17:38:12.978008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-12-09 17:38:12.978040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 
00:27:46.750 [2024-12-09 17:38:12.978357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.978390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.978591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.978623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.978884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.978915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.979164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.979205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.979329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.979360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-12-09 17:38:12.979561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.979592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.979775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.979806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.980081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.980112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.980403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.980435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.980712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.980744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-12-09 17:38:12.981036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.981067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.981342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.981376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.981572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.981609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.981862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.981894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.982191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.982224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-12-09 17:38:12.982495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.982530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.982814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.982846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.983149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.983189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.983448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.983480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.983698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.983730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-12-09 17:38:12.983981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.984013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.984287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.984321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.984602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.984633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.984900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.984932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.985267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.985302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-12-09 17:38:12.985575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.985607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.985896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.985928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.986146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.986187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.986330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.986361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.986546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.986577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-12-09 17:38:12.986778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.986808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.987079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.987111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.987370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.987402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.987649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.987680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.987932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.987963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-12-09 17:38:12.988231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.988264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.988573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.988605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-12-09 17:38:12.988880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-12-09 17:38:12.988912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.989207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.989241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.989517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.989555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 
00:27:46.752 [2024-12-09 17:38:12.989761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.989793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.989984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.990015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.990287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.990321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.990620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.990651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.990921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.990952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 
00:27:46.752 [2024-12-09 17:38:12.991065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.991095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.991221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.991253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.991430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.991462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.991656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.991687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.991964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.991995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 
00:27:46.752 [2024-12-09 17:38:12.992281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.992314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.992615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.992647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.992892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.992923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.993217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.993252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.993525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.993557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 
00:27:46.752 [2024-12-09 17:38:12.993767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.993798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.994065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.994096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.994356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.994391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.994584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.994615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-12-09 17:38:12.994893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-12-09 17:38:12.994924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 
00:27:46.752 [2024-12-09 17:38:12.995127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.752 [2024-12-09 17:38:12.995157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.752 qpair failed and we were unable to recover it.
00:27:46.755 [2024-12-09 17:38:13.027349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.027384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-12-09 17:38:13.027644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.027675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-12-09 17:38:13.027894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.027925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-12-09 17:38:13.028123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.028154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-12-09 17:38:13.028435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.028467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 
00:27:46.755 [2024-12-09 17:38:13.028648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.028679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-12-09 17:38:13.028866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.028897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-12-09 17:38:13.029102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.029133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-12-09 17:38:13.029416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.029450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-12-09 17:38:13.029666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.029698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 
00:27:46.755 [2024-12-09 17:38:13.029995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.030026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-12-09 17:38:13.030319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.030354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-12-09 17:38:13.030629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.030660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-12-09 17:38:13.030910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.030941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-12-09 17:38:13.031243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-12-09 17:38:13.031277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 
00:27:46.756 [2024-12-09 17:38:13.031571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.031603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.031881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.031913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.032185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.032218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.032504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.032536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.032814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.032845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 
00:27:46.756 [2024-12-09 17:38:13.033129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.033160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.033374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.033406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.033520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.033552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.033823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.033854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.034049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.034081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 
00:27:46.756 [2024-12-09 17:38:13.034278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.034311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.034510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.034543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.034816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.034854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.035112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.035145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.035449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.035481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 
00:27:46.756 [2024-12-09 17:38:13.035743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.035776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.036068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.036100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.036395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.036429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.036737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.036769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.037025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.037056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 
00:27:46.756 [2024-12-09 17:38:13.037199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.037233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.037430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.037461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.037736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.037767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.038061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.038092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.038317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.038351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 
00:27:46.756 [2024-12-09 17:38:13.038530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.038561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.038715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.038747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.038947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.038978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.039250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.039283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.039497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.039528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 
00:27:46.756 [2024-12-09 17:38:13.039776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.039807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.040001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.040032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.040318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.040352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.040619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.040651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.040935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.040965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 
00:27:46.756 [2024-12-09 17:38:13.041226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.041260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.041561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.041593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.041883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.041914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.042111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-12-09 17:38:13.042142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-12-09 17:38:13.042404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.042441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 
00:27:46.757 [2024-12-09 17:38:13.042684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.042716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.042965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.042996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.043266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.043300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.043580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.043612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.043888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.043919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 
00:27:46.757 [2024-12-09 17:38:13.044190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.044223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.044438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.044469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.044725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.044756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.045009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.045040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.045335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.045369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 
00:27:46.757 [2024-12-09 17:38:13.045641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.045673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.045875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.045907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.046187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.046220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.046508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.046539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.046766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.046797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 
00:27:46.757 [2024-12-09 17:38:13.047027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.047058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.047272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.047305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.047524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.047555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.047828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.047859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-12-09 17:38:13.048154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-12-09 17:38:13.048198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 
00:27:46.757 [2024-12-09 17:38:13.048405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.757 [2024-12-09 17:38:13.048436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:46.757 qpair failed and we were unable to recover it.
00:27:46.760 [... the same three-line failure sequence (connect() errno = 111, i.e. ECONNREFUSED, on tqpair=0x1f261a0 to 10.0.0.2 port 4420) repeats continuously from 17:38:13.048 through 17:38:13.078, over 100 occurrences ...]
00:27:46.760 [2024-12-09 17:38:13.078745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.078777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-12-09 17:38:13.078907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.078937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-12-09 17:38:13.079120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.079151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-12-09 17:38:13.079424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.079455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-12-09 17:38:13.079679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.079711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 
00:27:46.760 [2024-12-09 17:38:13.079961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.079994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-12-09 17:38:13.080277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.080310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-12-09 17:38:13.080592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.080623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-12-09 17:38:13.080917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.080948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-12-09 17:38:13.081201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.081233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 
00:27:46.760 [2024-12-09 17:38:13.081501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.081532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-12-09 17:38:13.081729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.081761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-12-09 17:38:13.082025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.082056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-12-09 17:38:13.082308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.082342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-12-09 17:38:13.082549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.082581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 
00:27:46.760 [2024-12-09 17:38:13.082847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-12-09 17:38:13.082878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-12-09 17:38:13.083128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.083159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.083295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.083327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.083526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.083557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.083762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.083793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-12-09 17:38:13.084016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.084047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.084240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.084274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.084534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.084565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.084861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.084892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.085160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.085201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-12-09 17:38:13.085451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.085482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.085781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.085812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.086016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.086048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.086344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.086377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.086603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.086634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-12-09 17:38:13.086766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.086797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.086994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.087025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.087230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.087262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.087459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.087491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.087785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.087817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-12-09 17:38:13.088094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.088124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.088386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.088419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.088634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.088665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.088929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.088961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.089188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.089221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-12-09 17:38:13.089470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.089502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.089753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.089784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.089966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.089997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.090124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.090155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.090444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.090476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-12-09 17:38:13.090752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.090787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.091049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.091080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.091381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.091416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.091636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.091667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.091869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.091900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-12-09 17:38:13.092151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.092192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.092393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.092426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.092682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.092713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.092838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.092870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-12-09 17:38:13.093145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.093193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-12-09 17:38:13.093470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-12-09 17:38:13.093502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.093777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.093808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.094082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.094113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.094344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.094377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.094527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.094560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 
00:27:46.762 [2024-12-09 17:38:13.094763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.094797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.095030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.095061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.095332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.095365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.095650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.095682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.095962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.095993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 
00:27:46.762 [2024-12-09 17:38:13.096246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.096279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.096577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.096608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.096824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.096855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.097129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.097160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.097376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.097407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 
00:27:46.762 [2024-12-09 17:38:13.097533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.097564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.097841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.097872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.098051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.098082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.098361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.098394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.098596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.098627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 
00:27:46.762 [2024-12-09 17:38:13.098845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.098876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.099128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.099159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.099471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.099503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.099686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.099718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.099896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.099928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 
00:27:46.762 [2024-12-09 17:38:13.100206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.100239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.100516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.100554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.100773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.100804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.101079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.101109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.101327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.101362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 
00:27:46.762 [2024-12-09 17:38:13.101639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.101672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.101951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.101984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.102268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.102301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.102605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.102637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.102785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.102816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 
00:27:46.762 [2024-12-09 17:38:13.102953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.102984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.103234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.103268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.103580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.103612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.103757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.103787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-12-09 17:38:13.104034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.104066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 
00:27:46.762 [2024-12-09 17:38:13.104270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-12-09 17:38:13.104304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.104579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.104611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.104872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.104903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.105102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.105132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.105401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.105434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 
00:27:46.763 [2024-12-09 17:38:13.105627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.105658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.105805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.105836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.106103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.106134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.106278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.106312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.106511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.106543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 
00:27:46.763 [2024-12-09 17:38:13.106689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.106720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.106930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.106963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.107141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.107186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.107463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.107501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.107666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.107699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 
00:27:46.763 [2024-12-09 17:38:13.107910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.107943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.108235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.108270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.108466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.108497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.108622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.108653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.108924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.108955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 
00:27:46.763 [2024-12-09 17:38:13.109156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.109199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.109398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.109430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.109628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.109660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.109875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.109906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.110188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.110221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 
00:27:46.763 [2024-12-09 17:38:13.110445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.110476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.110695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.110726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.110933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.110965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.111148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.111198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.111452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.111483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 
00:27:46.763 [2024-12-09 17:38:13.111699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.111731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.112038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.112069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.112272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.112306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.112522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.112553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-12-09 17:38:13.112754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.112785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 
00:27:46.763 [2024-12-09 17:38:13.113073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-12-09 17:38:13.113105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.113384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.113417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.113549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.113579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.113770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.113801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.114004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.114035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 
00:27:46.764 [2024-12-09 17:38:13.114304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.114337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.114466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.114498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.114743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.114775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.114954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.114984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.115188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.115221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 
00:27:46.764 [2024-12-09 17:38:13.115498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.115531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.115684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.115716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.115976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.116008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.116192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.116226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.116448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.116480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 
00:27:46.764 [2024-12-09 17:38:13.116736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.116767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.116952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.116988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.117266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.117301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.117524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.117556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.117846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.117879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 
00:27:46.764 [2024-12-09 17:38:13.118181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.118213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.118412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.118443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.118697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.118729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.118977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.119008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.119141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.119187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 
00:27:46.764 [2024-12-09 17:38:13.119327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.119358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.119575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.119606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.119867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.119898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.120081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.120112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.120406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.120439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 
00:27:46.764 [2024-12-09 17:38:13.120648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.120680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.121006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.121037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.121309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.121342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.121541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.121573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.121705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.121737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 
00:27:46.764 [2024-12-09 17:38:13.121950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.121982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.122277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.122312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.122576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.122607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.122832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.122863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-12-09 17:38:13.123043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-12-09 17:38:13.123075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 
00:27:46.764 [2024-12-09 17:38:13.123256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-12-09 17:38:13.123289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 00:27:46.765 [2024-12-09 17:38:13.123718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-12-09 17:38:13.123753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 00:27:46.765 [2024-12-09 17:38:13.124019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-12-09 17:38:13.124054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 00:27:46.765 [2024-12-09 17:38:13.124190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-12-09 17:38:13.124225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 00:27:46.765 [2024-12-09 17:38:13.124501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-12-09 17:38:13.124534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 
00:27:46.766 [2024-12-09 17:38:13.142814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.766 [2024-12-09 17:38:13.142890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.766 qpair failed and we were unable to recover it.
00:27:46.768 [2024-12-09 17:38:13.153677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.153710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.153984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.154019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.154304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.154339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.154615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.154647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.154941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.154973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 
00:27:46.768 [2024-12-09 17:38:13.155265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.155301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.155542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.155574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.155790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.155821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.156020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.156051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.156307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.156341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 
00:27:46.768 [2024-12-09 17:38:13.156570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.156601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.156804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.156837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.157136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.157182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.157458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.157491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.157692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.157724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 
00:27:46.768 [2024-12-09 17:38:13.157984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.158017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.158222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.158258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.158523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.158555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.158768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.158800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.159056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.159088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 
00:27:46.768 [2024-12-09 17:38:13.159281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.159316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.159519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.159551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.159825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.159858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.159999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.160031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.160287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.160321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 
00:27:46.768 [2024-12-09 17:38:13.160598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.160631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.161009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.161042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.161245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.161278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.161463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.161496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.161691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.161724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 
00:27:46.768 [2024-12-09 17:38:13.161922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.161955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.162135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.162177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.162450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.162482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.162766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.162800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.163082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.163115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 
00:27:46.768 [2024-12-09 17:38:13.163405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.163439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.163632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.163664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.163890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.163923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-12-09 17:38:13.164180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-12-09 17:38:13.164214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.164475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.164508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-12-09 17:38:13.164789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.164823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.165050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.165085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.165338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.165372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.165566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.165599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.165783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.165816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-12-09 17:38:13.165937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.165969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.166251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.166287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.166416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.166449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.166630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.166665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.166938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.166970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-12-09 17:38:13.167155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.167196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.167457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.167491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.167767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.167806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.168028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.168060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.168347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.168380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-12-09 17:38:13.168630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.168663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.168925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.168957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.169207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.169241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.169537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.169574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.169825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.169857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-12-09 17:38:13.170142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.170183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.170373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.170405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.170664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.170697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.170950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.170983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.171204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.171239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-12-09 17:38:13.171441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.171475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.171730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.171763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.171981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.172013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.172199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.172234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.172433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.172465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-12-09 17:38:13.172740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.172774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.172974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.173006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.173220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.173255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.173451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.173483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.173782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.173814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-12-09 17:38:13.174105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.174137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.174421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.174454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-12-09 17:38:13.174690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-12-09 17:38:13.174723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.174936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.174968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.175180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.175220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-12-09 17:38:13.175355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.175388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.175658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.175691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.175874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.175906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.176023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.176055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.176297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.176331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-12-09 17:38:13.176464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.176497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.176697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.176729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.176908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.176939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.177130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.177161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.177357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.177392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-12-09 17:38:13.177673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.177706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.177982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.178014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.178151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.178194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.178423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.178455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.178650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.178681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-12-09 17:38:13.178803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.178836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.179111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.179143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.179336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.179369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.179508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.179542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.179791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.179824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-12-09 17:38:13.179964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.179996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.180195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.180230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.180370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.180403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.180608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.180640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.180843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.180875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-12-09 17:38:13.181058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.181090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.181303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.181337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.181467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.181499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.181681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.181713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.181918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.181950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-12-09 17:38:13.182094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.182126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.182416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.182449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.182653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.182685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.182817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.182850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.183045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.183078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-12-09 17:38:13.183257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.183291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-12-09 17:38:13.183546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-12-09 17:38:13.183577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.183775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.183808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.184087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.184119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.184348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.184388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-12-09 17:38:13.184641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.184674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.184889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.184922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.185205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.185240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.185515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.185546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.185753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.185785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-12-09 17:38:13.186085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.186117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.186416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.186449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.186760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.186791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.186998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.187031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.187349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.187382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-12-09 17:38:13.187675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.187707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.188001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.188033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.188331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.188365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.188572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.188605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.188826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.188859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-12-09 17:38:13.189065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.189098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.189348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.189383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.189648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.189680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.189978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.190010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.190311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.190345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-12-09 17:38:13.190564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.190595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.190854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.190886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.191105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.191137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.191421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.191453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.191655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.191687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-12-09 17:38:13.191886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.191918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.192226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.192260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.192540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.192572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.192828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.192860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.193163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.193229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-12-09 17:38:13.193428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.193460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.193715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-12-09 17:38:13.193747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-12-09 17:38:13.193996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.194030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.194335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.194369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.194641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.194673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 
00:27:46.772 [2024-12-09 17:38:13.194870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.194902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.195117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.195149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.195410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.195443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.195737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.195769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.196008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.196048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 
00:27:46.772 [2024-12-09 17:38:13.196246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.196279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.196476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.196510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.196786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.196818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.196996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.197027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.197276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.197309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 
00:27:46.772 [2024-12-09 17:38:13.197608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.197640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.197911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.197943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.198262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.198296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.198565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.198598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-12-09 17:38:13.198878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-12-09 17:38:13.198912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 
00:27:46.772 [2024-12-09 17:38:13.199194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.199228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.199440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.199472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.199705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.199737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.200023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.200056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.200330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.200364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.200654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.200686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.200865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.200898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.201100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.201133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.201341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.201375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.201608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.201640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.201834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.201866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.201983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.202017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.202212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.202247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.202461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.202494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.202698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.202732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.202913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.202945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.203086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.203119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.203269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.203301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.203514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.203547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.203803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.203835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.772 [2024-12-09 17:38:13.204141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-12-09 17:38:13.204183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.204434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.204469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.204771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.204803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.205012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.205045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.205263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.205296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.205478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.205510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.205789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.205821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.206100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.206132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.206417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.206452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.206648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.206686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.206960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.206993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.207127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.207161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.207358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.207391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.207661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.207694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.207876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.207908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.208191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.208225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.208507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.208540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.208730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.208763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.209040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.209071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.209371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.209404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.209602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.209635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.209910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.209941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.210234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.210268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.210542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.210575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.210830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.210864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.211079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.211111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.211378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.211412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.211706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.211739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.211963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.211995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.212255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.212290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.212580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.212615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.212823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.212856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.213058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.213090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.213311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.213345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.213626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.213658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.213875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.213907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.214107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.214140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.214424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.214457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.214669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.214700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.214907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.773 [2024-12-09 17:38:13.214940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.773 qpair failed and we were unable to recover it.
00:27:46.773 [2024-12-09 17:38:13.215224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.215258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.215473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.215507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.215688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.215721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.215940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.215973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.216231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.216265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.216511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.216543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.216765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.216797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.217069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.217101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.217292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.217325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.217499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.217538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.217813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.217845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.218154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.218200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.218404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.218437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.218583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.218616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.218821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.218854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.219072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.219104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.219385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.219419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.219696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.219729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.219982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.220014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.220295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.220330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.220529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.220561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.220861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.220894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.221088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.221121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.221414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.221450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.221746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.221780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.222049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.222081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.222325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.222359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.222550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.222583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.222847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.222880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.223062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.223094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.223276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.223311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.223436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.223469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.223733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.223765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.223961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.223993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.224290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.224325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.224580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.224612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.224814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.224848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.225062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.225095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.225346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.774 [2024-12-09 17:38:13.225380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.774 qpair failed and we were unable to recover it.
00:27:46.774 [2024-12-09 17:38:13.225598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.225630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.225825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.225858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.226114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.226147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.226361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.226395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.226617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.226649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.226845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.226877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.227147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.227205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.227478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.227512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.227798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.227831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.228108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.228141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.228280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.228320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.228451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.228484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.228676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.228708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.228897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.228929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.229212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.229247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.229463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.775 [2024-12-09 17:38:13.229495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:46.775 qpair failed and we were unable to recover it.
00:27:46.775 [2024-12-09 17:38:13.229769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.229802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.230081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.230113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.230334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.230369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.230565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.230597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.230792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.230825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 
00:27:46.775 [2024-12-09 17:38:13.231100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.231133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.231345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.231379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.231595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.231628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.231930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.231964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.232200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.232234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 
00:27:46.775 [2024-12-09 17:38:13.232440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.232473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.232664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.232697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.232944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.232977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.233232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.233267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.233565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.233598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 
00:27:46.775 [2024-12-09 17:38:13.233881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.233914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.234116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.234148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.234428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.234462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.234741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.234773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.234975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.235007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 
00:27:46.775 [2024-12-09 17:38:13.235204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.235239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.235527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.235560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.235762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.235794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.775 qpair failed and we were unable to recover it. 00:27:46.775 [2024-12-09 17:38:13.236017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.775 [2024-12-09 17:38:13.236049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.236231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.236265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 
00:27:46.776 [2024-12-09 17:38:13.236447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.236480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.236664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.236697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.236882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.236915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.237112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.237144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.237410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.237443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 
00:27:46.776 [2024-12-09 17:38:13.237582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.237615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.237840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.237873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.238000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.238033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.238244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.238278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.238467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.238507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 
00:27:46.776 [2024-12-09 17:38:13.238688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.238720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.238833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.238866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.239048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.239082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.239273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.239307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.239516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.239550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 
00:27:46.776 [2024-12-09 17:38:13.239740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.239774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.239901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.239934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.240212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.240246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.240367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.240401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.240545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.240578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 
00:27:46.776 [2024-12-09 17:38:13.240775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.240808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.241010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.241042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.241148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.241208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.241474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.241507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.241632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.241665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 
00:27:46.776 [2024-12-09 17:38:13.241864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.241898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.242148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.242194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.242406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.242440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.242629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.242663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.242858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.242891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 
00:27:46.776 [2024-12-09 17:38:13.243207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.243242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.243443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.243477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.243733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.243765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.776 qpair failed and we were unable to recover it. 00:27:46.776 [2024-12-09 17:38:13.243962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.776 [2024-12-09 17:38:13.243994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.244120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.244152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 
00:27:46.777 [2024-12-09 17:38:13.244374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.244408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.244541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.244574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.244752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.244785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.245062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.245095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.245301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.245336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 
00:27:46.777 [2024-12-09 17:38:13.245557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.245590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.245842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.245874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.245998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.246032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.246260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.246294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.246491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.246524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 
00:27:46.777 [2024-12-09 17:38:13.246737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.246769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.247049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.247081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.247386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.247421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.247613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.247645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.247838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.247878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 
00:27:46.777 [2024-12-09 17:38:13.248138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.248181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.248386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.248419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.248560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.248593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.248729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.248762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.248977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.249010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 
00:27:46.777 [2024-12-09 17:38:13.249279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.249314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.249513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.249546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.249799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.249832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.250113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.250145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.250347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.250381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 
00:27:46.777 [2024-12-09 17:38:13.250593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.250626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.250825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.250858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.251080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.251113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.251330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.251378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.251618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.251650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 
00:27:46.777 [2024-12-09 17:38:13.251850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.251883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.252020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.252053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.252259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.252292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.252494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.252526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.252730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.252764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 
00:27:46.777 [2024-12-09 17:38:13.252977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.253009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.253135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.253176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.777 [2024-12-09 17:38:13.253429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.777 [2024-12-09 17:38:13.253462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.777 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.253592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.253623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.253824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.253855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 
00:27:46.778 [2024-12-09 17:38:13.254062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.254094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.254297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.254331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.254479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.254511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.254768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.254800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.255093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.255125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 
00:27:46.778 [2024-12-09 17:38:13.256115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.256261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.256742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.256826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.257138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.257195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.257354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.257386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.257591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.257623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 
00:27:46.778 [2024-12-09 17:38:13.257892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.257924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.258125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.258156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.258385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.258419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.258617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.258649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.258964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.258995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 
00:27:46.778 [2024-12-09 17:38:13.259219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.259254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.259444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.259476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.259655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.259686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.259883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.259914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.260256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.260290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 
00:27:46.778 [2024-12-09 17:38:13.260510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.260541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.260665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.260696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.260890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.260923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.261118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.261149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.261380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.261413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 
00:27:46.778 [2024-12-09 17:38:13.261662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.261694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.261941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.261972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.262230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.262264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.262595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.262627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.262921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.262953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 
00:27:46.778 [2024-12-09 17:38:13.263227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.263260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.263551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.263583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.263784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.263816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.264015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.264047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.264268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.264301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 
00:27:46.778 [2024-12-09 17:38:13.264516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.264548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.264745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.778 [2024-12-09 17:38:13.264778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.778 qpair failed and we were unable to recover it. 00:27:46.778 [2024-12-09 17:38:13.265038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.265070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.265275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.265308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.265502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.265535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 
00:27:46.779 [2024-12-09 17:38:13.265788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.265820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.266116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.266147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.266435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.266468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.266766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.266798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.267013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.267045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 
00:27:46.779 [2024-12-09 17:38:13.267317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.267350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.267562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.267594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.267746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.267779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.268058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.268088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.268212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.268246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 
00:27:46.779 [2024-12-09 17:38:13.268445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.268477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.268692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.268724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.268925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.268957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.269227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.269260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.269440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.269472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 
00:27:46.779 [2024-12-09 17:38:13.269749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.269788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.270055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.270087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.270346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.270380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.270577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.270609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.270763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.270794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 
00:27:46.779 [2024-12-09 17:38:13.270989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.271021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.271320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.271353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.271556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.271588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.271726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.271758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.271952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.271984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 
00:27:46.779 [2024-12-09 17:38:13.272189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.272223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:46.779 [2024-12-09 17:38:13.272484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.779 [2024-12-09 17:38:13.272516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:46.779 qpair failed and we were unable to recover it. 00:27:47.055 [2024-12-09 17:38:13.272811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-12-09 17:38:13.272842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-12-09 17:38:13.273031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-12-09 17:38:13.273063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-12-09 17:38:13.273270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-12-09 17:38:13.273304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 
00:27:47.055 [2024-12-09 17:38:13.273506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-12-09 17:38:13.273539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-12-09 17:38:13.273804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-12-09 17:38:13.273837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-12-09 17:38:13.274123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-12-09 17:38:13.274153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-12-09 17:38:13.274459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-12-09 17:38:13.274492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-12-09 17:38:13.274683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-12-09 17:38:13.274715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 
00:27:47.055 [2024-12-09 17:38:13.274908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-12-09 17:38:13.274939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-12-09 17:38:13.275217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-12-09 17:38:13.275251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-12-09 17:38:13.275430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-12-09 17:38:13.275462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-12-09 17:38:13.275731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-12-09 17:38:13.275763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-12-09 17:38:13.276055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-12-09 17:38:13.276087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 
00:27:47.055 [2024-12-09 17:38:13.276409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.055 [2024-12-09 17:38:13.276443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.055 qpair failed and we were unable to recover it.
00:27:47.055 [2024-12-09 17:38:13.276721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.055 [2024-12-09 17:38:13.276752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.055 qpair failed and we were unable to recover it.
00:27:47.055 [2024-12-09 17:38:13.277038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.055 [2024-12-09 17:38:13.277076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.055 qpair failed and we were unable to recover it.
00:27:47.055 [2024-12-09 17:38:13.277361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.055 [2024-12-09 17:38:13.277395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.055 qpair failed and we were unable to recover it.
00:27:47.055 [2024-12-09 17:38:13.277592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.055 [2024-12-09 17:38:13.277624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.055 qpair failed and we were unable to recover it.
00:27:47.055 [2024-12-09 17:38:13.277882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.055 [2024-12-09 17:38:13.277914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.055 qpair failed and we were unable to recover it.
00:27:47.055 [2024-12-09 17:38:13.278037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.055 [2024-12-09 17:38:13.278069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.055 qpair failed and we were unable to recover it.
00:27:47.055 [2024-12-09 17:38:13.278357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.055 [2024-12-09 17:38:13.278391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.055 qpair failed and we were unable to recover it.
00:27:47.055 [2024-12-09 17:38:13.278657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.055 [2024-12-09 17:38:13.278689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.055 qpair failed and we were unable to recover it.
00:27:47.055 [2024-12-09 17:38:13.278890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.055 [2024-12-09 17:38:13.278921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.055 qpair failed and we were unable to recover it.
00:27:47.055 [2024-12-09 17:38:13.279188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.055 [2024-12-09 17:38:13.279221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.055 qpair failed and we were unable to recover it.
00:27:47.055 [2024-12-09 17:38:13.279472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.055 [2024-12-09 17:38:13.279504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.055 qpair failed and we were unable to recover it.
00:27:47.055 [2024-12-09 17:38:13.279803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.055 [2024-12-09 17:38:13.279834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.055 qpair failed and we were unable to recover it.
00:27:47.055 [2024-12-09 17:38:13.280106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.280137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.280432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.280466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.280738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.280770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.281033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.281065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.281214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.281248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.281463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.281494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.281746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.281778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.282049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.282080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.282293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.282327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.282601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.282633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.282922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.282954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.283227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.283259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.283552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.283584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.283782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.283814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.283977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.284009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.284220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.284254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.284470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.284507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.284709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.284740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.284980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.285012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.285312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.285347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.285641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.285673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.285890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.285922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.286123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.286155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.286436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.286471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.286744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.286777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.287070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.287101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.287428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.287462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.287620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.287652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.287924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.287955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.288207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.288241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.288493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.288525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.288775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.288808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.289028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.289060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.289310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.289343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.289540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.289572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.289830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.289862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.290040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.290072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.290343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.290376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.290673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.290705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-12-09 17:38:13.290919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-12-09 17:38:13.290950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.291141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.291185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.291451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.291483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.291683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.291714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.291857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.291889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.292177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.292212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.292462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.292494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.292707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.292739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.293002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.293033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.293314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.293348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.293631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.293663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.293868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.293899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.294184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.294217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.294499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.294534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.294718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.294750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.294945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.294976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.295252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.295285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.295435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.295466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.295706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.295738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.296015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.296046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.296243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.296275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.296528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.296560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.296703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.296734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.296870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.296901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.297082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.297113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.297246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.297278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.297495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.297527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.297709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.297740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.298020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.298051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.298234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.298268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.298543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.298574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.298765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.298796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.299061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.299092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.299349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.299382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.299577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.299609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.299912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.299943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.300221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.300253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.300529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.300560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.300773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.300803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.301000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-12-09 17:38:13.301032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-12-09 17:38:13.301232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-12-09 17:38:13.301264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-12-09 17:38:13.301519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-12-09 17:38:13.301549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-12-09 17:38:13.301676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-12-09 17:38:13.301707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-12-09 17:38:13.301988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.302020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.302199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.302232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.302364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.302402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.302651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.302683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.302982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.303014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 
00:27:47.058 [2024-12-09 17:38:13.303307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.303340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.303530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.303561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.303818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.303850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.304137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.304178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.304367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.304398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 
00:27:47.058 [2024-12-09 17:38:13.304664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.304695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.304917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.304949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.305142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.305181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.305386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.305418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.305566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.305597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 
00:27:47.058 [2024-12-09 17:38:13.305851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.305882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.306108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.306139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.306394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.306429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.306722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.306754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.307001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.307033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 
00:27:47.058 [2024-12-09 17:38:13.307299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.307333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.307627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.307659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.307926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.307957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.308153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.308195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.308395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.308427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 
00:27:47.058 [2024-12-09 17:38:13.308675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.308707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.308979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.309011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.309203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.309236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.309438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.309470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.309684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.309723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 
00:27:47.058 [2024-12-09 17:38:13.309909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.309940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.310068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.310100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.310373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.310406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.310553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.310586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.310877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.310909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 
00:27:47.058 [2024-12-09 17:38:13.311210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.311243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.058 [2024-12-09 17:38:13.311382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.058 [2024-12-09 17:38:13.311413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.058 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.311610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.311642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.311840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.311872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.312058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.312090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 
00:27:47.059 [2024-12-09 17:38:13.312286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.312320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.312450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.312482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.312662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.312695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.312915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.312947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.313198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.313232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 
00:27:47.059 [2024-12-09 17:38:13.313510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.313542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.313794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.313826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.314086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.314118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.314424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.314457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.314717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.314749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 
00:27:47.059 [2024-12-09 17:38:13.315050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.315083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.315351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.315384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.315598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.315630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.315892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.315924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.316218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.316251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 
00:27:47.059 [2024-12-09 17:38:13.316449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.316481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.316691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.316723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.316989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.317021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.317327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.317361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.317607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.317639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 
00:27:47.059 [2024-12-09 17:38:13.317840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.317872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.318149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.318190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.318491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.318523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.318774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.318806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.319119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.319151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 
00:27:47.059 [2024-12-09 17:38:13.319436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.319468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.319670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.319702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.319956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.319988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.320202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.320234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.320491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.320523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 
00:27:47.059 [2024-12-09 17:38:13.320802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-12-09 17:38:13.320835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-12-09 17:38:13.321086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.321117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-12-09 17:38:13.321266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.321300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-12-09 17:38:13.321500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.321532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-12-09 17:38:13.321800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.321832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 
00:27:47.060 [2024-12-09 17:38:13.322086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.322117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-12-09 17:38:13.322431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.322466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-12-09 17:38:13.322742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.322772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-12-09 17:38:13.322984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.323016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-12-09 17:38:13.323236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.323271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 
00:27:47.060 [2024-12-09 17:38:13.323543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.323575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-12-09 17:38:13.323838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.323869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-12-09 17:38:13.324129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.324160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-12-09 17:38:13.324363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.324395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-12-09 17:38:13.324660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-12-09 17:38:13.324691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 
00:27:47.060 [2024-12-09 17:38:13.324940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.060 [2024-12-09 17:38:13.324972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.060 qpair failed and we were unable to recover it.
00:27:47.063 [... same connect() failed (errno = 111) / qpair recovery error for tqpair=0x1f261a0, addr=10.0.0.2, port=4420 repeated through 2024-12-09 17:38:13.356203 ...]
00:27:47.063 [2024-12-09 17:38:13.356405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.356439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.356656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.356690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.356986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.357020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.357311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.357345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.357494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.357529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 
00:27:47.063 [2024-12-09 17:38:13.357719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.357754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.358075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.358110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.358434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.358471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.358611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.358645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.358848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.358882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 
00:27:47.063 [2024-12-09 17:38:13.359066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.359099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.359246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.359282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.359556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.359590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.359823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.359857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.360136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.360178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 
00:27:47.063 [2024-12-09 17:38:13.360454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.360488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.360705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.360740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.360922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.360955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.361189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.361225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.361417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.361457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 
00:27:47.063 [2024-12-09 17:38:13.361757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.361790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.362072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.362106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.362424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.362459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.362652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.362685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.362878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.362912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 
00:27:47.063 [2024-12-09 17:38:13.363133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.363181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.363379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.363413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.363632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.363666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.363916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.363951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-12-09 17:38:13.364218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-12-09 17:38:13.364253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-12-09 17:38:13.364439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.364472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.364667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.364702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.364917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.364952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.365208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.365244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.365446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.365479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-12-09 17:38:13.365754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.365787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.366055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.366089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.366388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.366423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.366687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.366721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.366923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.366958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-12-09 17:38:13.367177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.367212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.367486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.367520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.367797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.367830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.368057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.368090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.368369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.368405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-12-09 17:38:13.368684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.368718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.368998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.369040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.369226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.369261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.369460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.369494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.369677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.369711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-12-09 17:38:13.369904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.369937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.370119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.370153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.370367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.370402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.370679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.370712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.370991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.371026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-12-09 17:38:13.371212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.371247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.371503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.371537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.371785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.371819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.371949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.371983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.372165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.372211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-12-09 17:38:13.372408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.372442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.372646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.372680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.372935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.372969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.373102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-12-09 17:38:13.373135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-12-09 17:38:13.373331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-12-09 17:38:13.373367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-12-09 17:38:13.373641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-12-09 17:38:13.373675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-12-09 17:38:13.373893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-12-09 17:38:13.373927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-12-09 17:38:13.374211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-12-09 17:38:13.374246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-12-09 17:38:13.374451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-12-09 17:38:13.374485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-12-09 17:38:13.374757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-12-09 17:38:13.374791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-12-09 17:38:13.375066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-12-09 17:38:13.375100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-12-09 17:38:13.375229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-12-09 17:38:13.375264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-12-09 17:38:13.375539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-12-09 17:38:13.375572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-12-09 17:38:13.375840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-12-09 17:38:13.375874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-12-09 17:38:13.376165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-12-09 17:38:13.376212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-12-09 17:38:13.376399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.376433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.376704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.376737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.376937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.376971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.377151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.377195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.377395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.377428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.377643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.377677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.377805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.377840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.378164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.378217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.378497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.378530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.378725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.378759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.379031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.379064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.379356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.379392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.379662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.379697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.379899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.379933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.380155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.380199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.380480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.380515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.380785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.380817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.381106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.381139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.381341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.381377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.381631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.381665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2061097 Killed "${NVMF_APP[@]}" "$@"
00:27:47.065 [2024-12-09 17:38:13.381942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.381977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.382158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.382202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.382424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.382459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:47.065 [2024-12-09 17:38:13.382734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.382769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.382960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:47.065 [2024-12-09 17:38:13.382995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 [2024-12-09 17:38:13.383209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.383245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:47.065 [2024-12-09 17:38:13.383521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.065 [2024-12-09 17:38:13.383556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.065 qpair failed and we were unable to recover it.
00:27:47.065 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:47.065 [2024-12-09 17:38:13.383854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.383889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.066 [2024-12-09 17:38:13.384154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.384198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.384399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.384434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.384629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.384663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.384940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.384974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.385177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.385212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.385506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.385540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.385797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.385833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.386032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.386065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.386247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.386283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.386492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.386526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.386782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.386816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.386939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.386973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.387247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.387281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.387532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.387564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.387768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.387799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.388102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.388135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.388403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.388438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.388735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.388769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.389059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.389093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.389246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.389282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.389561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.389594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.389781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.389815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.390093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.390128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.390425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.390460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.390689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.390725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.390997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.391031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2061801
00:27:47.066 [2024-12-09 17:38:13.391319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.391355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2061801
00:27:47.066 [2024-12-09 17:38:13.391628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.391664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:47.066 [2024-12-09 17:38:13.391920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.391955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2061801 ']'
00:27:47.066 [2024-12-09 17:38:13.392087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.392122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.392388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.392424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 [2024-12-09 17:38:13.392613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.392647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:47.066 [2024-12-09 17:38:13.392925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.392960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:47.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:47.066 [2024-12-09 17:38:13.393149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-12-09 17:38:13.393195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:47.066 [2024-12-09 17:38:13.393403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.393438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.067 [2024-12-09 17:38:13.393690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.393724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.394018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.394052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.394249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.394286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.394548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.394582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.394868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.394902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.395143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.395190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.395382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.395418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.395699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.395733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.395923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.395957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.396157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.396210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.396424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.396459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.396732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.396766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.397002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.397035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.397239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.397276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.397553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.397588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.397732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.397767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.398085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.398119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.398431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.398467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.398670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.398705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.398944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.398978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.399299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.399335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.399612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.399646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.399838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.399873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.400076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.400113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.400429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.400465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.400734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.400770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.400952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.400986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.401264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.401302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.401509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.401543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.401823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.401857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.402007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.402041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.402230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.402267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.402424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.402457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.402590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.402624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.402848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.402883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.403132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.403178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.403376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.067 [2024-12-09 17:38:13.403411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.067 qpair failed and we were unable to recover it.
00:27:47.067 [2024-12-09 17:38:13.403675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.067 [2024-12-09 17:38:13.403711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.067 qpair failed and we were unable to recover it. 00:27:47.067 [2024-12-09 17:38:13.403974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.067 [2024-12-09 17:38:13.404008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.067 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.404143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.404188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.404461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.404497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.404680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.404714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 
00:27:47.068 [2024-12-09 17:38:13.404863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.404897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.405181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.405217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.405480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.405513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.405709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.405742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.406034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.406070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 
00:27:47.068 [2024-12-09 17:38:13.406322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.406358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.406633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.406667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.406879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.406913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.407103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.407138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.407288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.407324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 
00:27:47.068 [2024-12-09 17:38:13.407616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.407650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.407836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.407869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.408121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.408155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.408284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.408320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.408519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.408552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 
00:27:47.068 [2024-12-09 17:38:13.408825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.408861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.409042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.409075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.409262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.409298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.409576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.409612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.409843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.409878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 
00:27:47.068 [2024-12-09 17:38:13.410074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.410108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.410393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.410432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.410551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.410585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.410797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.410831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.411031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.411065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 
00:27:47.068 [2024-12-09 17:38:13.411285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.411323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.411621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.411655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.411915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.411950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.412251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.412288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.412539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.412572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 
00:27:47.068 [2024-12-09 17:38:13.412796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.412830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.412959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.068 [2024-12-09 17:38:13.412994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.068 qpair failed and we were unable to recover it. 00:27:47.068 [2024-12-09 17:38:13.413118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.413152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.413361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.413396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.413670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.413704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 
00:27:47.069 [2024-12-09 17:38:13.413908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.413947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.414246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.414284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.414545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.414579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.414832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.414866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.415050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.415085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 
00:27:47.069 [2024-12-09 17:38:13.415276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.415312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.415589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.415624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.415879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.415913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.416103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.416137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.416334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.416370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 
00:27:47.069 [2024-12-09 17:38:13.416503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.416536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.416755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.416792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.416980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.417014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.417291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.417327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.417550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.417584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 
00:27:47.069 [2024-12-09 17:38:13.417787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.417822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.418076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.418109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.418403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.418440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.418707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.418741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.419007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.419041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 
00:27:47.069 [2024-12-09 17:38:13.419339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.419380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.419599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.419633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.419768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.419802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.420083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.420117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.420385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.420420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 
00:27:47.069 [2024-12-09 17:38:13.420572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.420605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.420801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.420835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.421022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.421063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.421341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.421376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.421569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.421603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 
00:27:47.069 [2024-12-09 17:38:13.421797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.421831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.422126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.422160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.422455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.422490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.422758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.422792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.423038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.423071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 
00:27:47.069 [2024-12-09 17:38:13.423353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.423389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-12-09 17:38:13.423571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-12-09 17:38:13.423604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.423856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.423890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.424076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.424110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.424331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.424365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 
00:27:47.070 [2024-12-09 17:38:13.424656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.424690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.424877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.424912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.425122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.425155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.425361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.425396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.425654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.425688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 
00:27:47.070 [2024-12-09 17:38:13.425984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.426017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.426221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.426256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.426534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.426568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.426869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.426903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.427100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.427135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 
00:27:47.070 [2024-12-09 17:38:13.427496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.427580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.427882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.427924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.428121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.428156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.428393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.428429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-12-09 17:38:13.428684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-12-09 17:38:13.428728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 
00:27:47.070 [2024-12-09 17:38:13.428930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.428964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.429231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.429270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.429476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.429510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.429767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.429802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.429989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.430023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.430207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.430242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.430529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.430564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.430834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.430868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.431088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.431122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.431416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.431452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.431641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.431676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.431868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.431902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.432020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.432050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.432253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.432290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.432441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.432476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.432778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.432812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.433012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.433046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.433397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.433444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.433629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.433664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.433929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.433963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.434241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.070 [2024-12-09 17:38:13.434278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.070 qpair failed and we were unable to recover it.
00:27:47.070 [2024-12-09 17:38:13.434561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.434595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.434855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.434889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.435113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.435148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.435371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.435406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.435603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.435637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.435725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f340f0 (9): Bad file descriptor
00:27:47.071 [2024-12-09 17:38:13.436013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.436050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.436329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.436364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.436558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.436591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.436851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.436890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.437163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.437209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.437424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.437458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.437745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.437780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.438056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.438090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.438375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.438410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.438658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.438692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.438942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.438975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.439237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.439273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.439571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.439605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.439821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.439903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.440226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.440303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.440634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.440673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.440935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.440969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.441194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.441229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.441522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.441557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.441848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.441882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.442081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.442115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.442318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.442354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.442376] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization...
00:27:47.071 [2024-12-09 17:38:13.442422] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:47.071 [2024-12-09 17:38:13.442626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.442658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.442952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.442983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.443258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.443293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.443583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.443623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.443826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.443861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.444058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.444092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.444233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.444269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.444412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.444446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.444722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.444756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.444960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.444994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.445274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-12-09 17:38:13.445310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
00:27:47.071 [2024-12-09 17:38:13.445514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.445548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.445676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.445711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.445918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.445952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.446090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.446126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.446374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.446410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.446629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.446665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.446886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.446922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.447120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.447153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.447419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.447454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.447724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.447757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.448065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.448098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.448395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.448431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.448575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.448609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.448792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.448825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.449095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.449131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.449331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.449366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.449637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.449670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.449810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.449844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.449977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.450012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.450205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.450241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.450451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.450484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.450662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.450696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.450942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.450976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.451186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.451224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.451425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.451461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.451708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.451742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.452037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.452071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.452337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.452372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.452660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.452693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.452987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.453022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.453239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.453274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.453536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.453571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.453758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.453797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.453940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.453973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.454190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-12-09 17:38:13.454225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-12-09 17:38:13.454477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-12-09 17:38:13.454512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-12-09 17:38:13.454801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-12-09 17:38:13.454836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-12-09 17:38:13.455055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-12-09 17:38:13.455090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.455337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.455376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.455568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.455603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 
00:27:47.073 [2024-12-09 17:38:13.455731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.455764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.455962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.455997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.456187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.456223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.456436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.456469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.456683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.456717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 
00:27:47.073 [2024-12-09 17:38:13.456990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.457024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.457310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.457345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.457485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.457518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.457706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.457758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.457950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.457983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 
00:27:47.073 [2024-12-09 17:38:13.458249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.458283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.458412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.458448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.458624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.458659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.458933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.458971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.459238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.459275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 
00:27:47.073 [2024-12-09 17:38:13.459476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.459509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.459641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.459677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.459950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.459982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.460298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.460334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.460614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.460649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 
00:27:47.073 [2024-12-09 17:38:13.460950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.460983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.461285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.461319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.461506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.461541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.461737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.461771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.462063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.462096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 
00:27:47.073 [2024-12-09 17:38:13.462277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.462312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.462562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.462596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.462826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.462860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.462982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.463016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.463291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.463326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 
00:27:47.073 [2024-12-09 17:38:13.463530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.463563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.463835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.463869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.464145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-12-09 17:38:13.464203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-12-09 17:38:13.464390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.464424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.464648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.464681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 
00:27:47.074 [2024-12-09 17:38:13.464829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.464862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.465045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.465079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.465323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.465358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.465485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.465519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.465729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.465763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 
00:27:47.074 [2024-12-09 17:38:13.465982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.466016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.466289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.466326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.466514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.466548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.466678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.466711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.466914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.466948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 
00:27:47.074 [2024-12-09 17:38:13.467135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.467180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.467435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.467469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.467741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.467773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.467898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.467933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.468208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.468242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 
00:27:47.074 [2024-12-09 17:38:13.468512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.468547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.468804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.468838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.469091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.469123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.469316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.469352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.469642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.469675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 
00:27:47.074 [2024-12-09 17:38:13.469913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.469946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.470219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.470253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.470445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.470478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.470616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.470650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.470872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.470907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 
00:27:47.074 [2024-12-09 17:38:13.471204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.471240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.471429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.471463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.471685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.471719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.472006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.472039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.472313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.472349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 
00:27:47.074 [2024-12-09 17:38:13.472633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.472666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.472852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.472885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.473149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.473192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.473374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.473408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.473581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.473614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 
00:27:47.074 [2024-12-09 17:38:13.473816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.473849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.473977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.474009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-12-09 17:38:13.474308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-12-09 17:38:13.474349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-12-09 17:38:13.474568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-12-09 17:38:13.474602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-12-09 17:38:13.474870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-12-09 17:38:13.474903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 
00:27:47.075 [2024-12-09 17:38:13.475194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-12-09 17:38:13.475229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-12-09 17:38:13.475419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-12-09 17:38:13.475452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-12-09 17:38:13.475626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-12-09 17:38:13.475659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-12-09 17:38:13.475931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-12-09 17:38:13.475964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-12-09 17:38:13.476152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-12-09 17:38:13.476204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 
00:27:47.075 [2024-12-09 17:38:13.476452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-12-09 17:38:13.476487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 
[... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error (addr=10.0.0.2, port=4420), "qpair failed and we were unable to recover it." — repeats continuously from 17:38:13.476 through 17:38:13.505, alternating between tqpair=0x7f30f8000b90 and tqpair=0x1f261a0 ...]
00:27:47.078 [2024-12-09 17:38:13.505419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.505451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.505577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.505610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.505882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.505915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.506162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.506207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.506383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.506415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 
00:27:47.078 [2024-12-09 17:38:13.506665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.506697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.506967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.507001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.507187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.507222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.507442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.507475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.507667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.507699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 
00:27:47.078 [2024-12-09 17:38:13.507903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.507936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.508119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.508152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.508285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.508320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.508508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.508541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.508802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.508835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 
00:27:47.078 [2024-12-09 17:38:13.509105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.509137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.509365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.509411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.509604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.509640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.509780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.509813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.510019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.510053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 
00:27:47.078 [2024-12-09 17:38:13.510295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.510331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.510601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.510634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.510771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.510804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.511010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.511043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.511227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.511262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 
00:27:47.078 [2024-12-09 17:38:13.511489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.511525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.511711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.511745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.511921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.511954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.512175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.512209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.512390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.512423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 
00:27:47.078 [2024-12-09 17:38:13.512660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.078 [2024-12-09 17:38:13.512692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.078 qpair failed and we were unable to recover it. 00:27:47.078 [2024-12-09 17:38:13.512814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.512848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.513037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.513072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.513268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.513302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.513446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.513479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 
00:27:47.079 [2024-12-09 17:38:13.513687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.513720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.513934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.513967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.514156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.514201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.514530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.514564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.514761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.514794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 
00:27:47.079 [2024-12-09 17:38:13.515051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.515084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.515204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.515239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.515504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.515538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.515719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.515753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.516014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.516048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 
00:27:47.079 [2024-12-09 17:38:13.516164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.516206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.516344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.516378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.516572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.516605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.516817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.516850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.516971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.517004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 
00:27:47.079 [2024-12-09 17:38:13.517291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.517325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.517452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.517485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.517677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.517716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.517991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.518024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.518262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.518296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 
00:27:47.079 [2024-12-09 17:38:13.518558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.518591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.518728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.518762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.518883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.518915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.519062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.519096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.519342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.519376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 
00:27:47.079 [2024-12-09 17:38:13.519589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.519622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.519747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.519780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.520004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.520036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.520225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.520258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.520397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.520431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 
00:27:47.079 [2024-12-09 17:38:13.520618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.520650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.520854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.520890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.521079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.521112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.521385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.521419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.079 [2024-12-09 17:38:13.521607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.521641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 
00:27:47.079 [2024-12-09 17:38:13.521865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.079 [2024-12-09 17:38:13.521899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.079 qpair failed and we were unable to recover it. 00:27:47.080 [2024-12-09 17:38:13.522088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.080 [2024-12-09 17:38:13.522122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.080 qpair failed and we were unable to recover it. 00:27:47.080 [2024-12-09 17:38:13.522345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.080 [2024-12-09 17:38:13.522380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.080 qpair failed and we were unable to recover it. 00:27:47.080 [2024-12-09 17:38:13.522568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.080 [2024-12-09 17:38:13.522601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.080 qpair failed and we were unable to recover it. 00:27:47.080 [2024-12-09 17:38:13.522725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.080 [2024-12-09 17:38:13.522759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.080 qpair failed and we were unable to recover it. 
00:27:47.080 [2024-12-09 17:38:13.522860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.080 [2024-12-09 17:38:13.522891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.080 qpair failed and we were unable to recover it.
00:27:47.080 [2024-12-09 17:38:13.524651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... identical connect() failed / sock connection error / "qpair failed and we were unable to recover it." records for tqpair=0x1f261a0 repeat with advancing timestamps through 17:38:13.545699 ...]
00:27:47.082 [2024-12-09 17:38:13.546024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.082 [2024-12-09 17:38:13.546080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.082 qpair failed and we were unable to recover it.
[... identical records for tqpair=0x7f30f8000b90 repeat with advancing timestamps through 17:38:13.549291 ...]
00:27:47.083 [2024-12-09 17:38:13.549472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.549506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.549691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.549723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.549858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.549891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.550093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.550126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.550271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.550307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 
00:27:47.083 [2024-12-09 17:38:13.550504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.550538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.550679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.550711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.550922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.550955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.551222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.551258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.551396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.551429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 
00:27:47.083 [2024-12-09 17:38:13.551553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.551586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.551858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.551892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.552190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.552225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.552348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.552381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.552517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.552549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 
00:27:47.083 [2024-12-09 17:38:13.552736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.552768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.552948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.552980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.553103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.553137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.553274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.553318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.553535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.553569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 
00:27:47.083 [2024-12-09 17:38:13.553756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.553788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.553974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.554006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.554189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.554224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.554424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.554455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.554607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.554644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 
00:27:47.083 [2024-12-09 17:38:13.554781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.554813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.554948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.554980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.555176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.555211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.555379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.555411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-12-09 17:38:13.555535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.555566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 
00:27:47.083 [2024-12-09 17:38:13.555688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-12-09 17:38:13.555719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.556000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.556032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.556297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.556331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.556510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.556542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.556755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.556787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 
00:27:47.084 [2024-12-09 17:38:13.556967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.557000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.557129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.557161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.557320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.557359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.557560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.557593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.557738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.557771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 
00:27:47.084 [2024-12-09 17:38:13.558055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.558088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.558267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.558302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.558428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.558460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.558575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.558609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.558908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.558940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 
00:27:47.084 [2024-12-09 17:38:13.559068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.559100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.559292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.559327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.559457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.559488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.559633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.559664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.559795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.559829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 
00:27:47.084 [2024-12-09 17:38:13.560132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.560165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.560381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.560413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.560537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.560570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.560676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.560709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.560912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.560945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 
00:27:47.084 [2024-12-09 17:38:13.561188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.561223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.561366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.561398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.561542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.561573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.561699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.561731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.561846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.561877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 
00:27:47.084 [2024-12-09 17:38:13.562046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.562079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.562270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.562305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.562497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.562530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.562664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.562697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.562913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.562968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 
00:27:47.084 [2024-12-09 17:38:13.563162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.563211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.563389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.563423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.563554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.563587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.563800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.563835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-12-09 17:38:13.564028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-12-09 17:38:13.564061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 
00:27:47.084 [2024-12-09 17:38:13.564329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.564364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.564492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.564524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.564714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.564748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.564873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.564906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.565138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.565178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 
00:27:47.085 [2024-12-09 17:38:13.565372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.565406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.565534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.565568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.565821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.565864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.566106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.566139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.566305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.085 [2024-12-09 17:38:13.566334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.085 [2024-12-09 17:38:13.566342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.085 [2024-12-09 17:38:13.566349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:47.085 [2024-12-09 17:38:13.566355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.085 [2024-12-09 17:38:13.566374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.566411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.566526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.566556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.566686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.566716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.566892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.566923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.567160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.567219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 
00:27:47.085 [2024-12-09 17:38:13.567350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.085 [2024-12-09 17:38:13.567384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.085 qpair failed and we were unable to recover it.
00:27:47.085 [2024-12-09 17:38:13.567595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.085 [2024-12-09 17:38:13.567628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.085 qpair failed and we were unable to recover it.
00:27:47.085 [2024-12-09 17:38:13.567808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:47.085 [2024-12-09 17:38:13.567916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.085 [2024-12-09 17:38:13.567949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.085 qpair failed and we were unable to recover it.
00:27:47.085 [2024-12-09 17:38:13.567918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:47.085 [2024-12-09 17:38:13.568020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:47.085 [2024-12-09 17:38:13.568021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:47.085 [2024-12-09 17:38:13.568143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.085 [2024-12-09 17:38:13.568184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.085 qpair failed and we were unable to recover it.
00:27:47.085 [2024-12-09 17:38:13.568380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.568412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.568607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.568641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.568778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.568810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.568992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.569025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.569142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.569181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 
00:27:47.085 [2024-12-09 17:38:13.569420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.569454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.569632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.569666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.569855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.569888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.570126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.570159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.570349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.570382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 
00:27:47.085 [2024-12-09 17:38:13.570569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.570603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.570727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.570760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.570950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.570984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.571182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.571216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.571347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.571379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 
00:27:47.085 [2024-12-09 17:38:13.571507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.571538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.571711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.571745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.571932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-12-09 17:38:13.571966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 00:27:47.085 [2024-12-09 17:38:13.572213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.572248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.572429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.572461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 
00:27:47.086 [2024-12-09 17:38:13.572652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.572684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.572916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.572949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.573149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.573194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.573414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.573447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.573637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.573671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 
00:27:47.086 [2024-12-09 17:38:13.573854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.573889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.574127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.574176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.574312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.574345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.574528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.574561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.574693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.574725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 
00:27:47.086 [2024-12-09 17:38:13.574906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.574939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.575112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.575146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.575270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.575303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.575481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.575514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.575648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.575681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 
00:27:47.086 [2024-12-09 17:38:13.575802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.575836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.576011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.576044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.576149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.576196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.576385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.576419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.576541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.576574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 
00:27:47.086 [2024-12-09 17:38:13.576688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.576722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.576836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.576869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.577046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.577079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.577199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.577234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.577360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.577392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 
00:27:47.086 [2024-12-09 17:38:13.577563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.577595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.577727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.577760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.577965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.577999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.086 [2024-12-09 17:38:13.578118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.086 [2024-12-09 17:38:13.578150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.086 qpair failed and we were unable to recover it. 00:27:47.361 [2024-12-09 17:38:13.578284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.361 [2024-12-09 17:38:13.578317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.361 qpair failed and we were unable to recover it. 
00:27:47.361 [2024-12-09 17:38:13.578493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.361 [2024-12-09 17:38:13.578528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.361 qpair failed and we were unable to recover it. 00:27:47.361 [2024-12-09 17:38:13.578665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.361 [2024-12-09 17:38:13.578698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.361 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.578816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.578849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.579032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.579067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.579192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.579226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 
00:27:47.362 [2024-12-09 17:38:13.579431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.579464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.579586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.579619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.579734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.579767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.579891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.579924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.580102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.580138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 
00:27:47.362 [2024-12-09 17:38:13.580348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.580396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.580598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.580632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.580846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.580879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.581081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.581115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.581252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.581287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 
00:27:47.362 [2024-12-09 17:38:13.581412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.581445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.581575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.581616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.581724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.581756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.581878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.581913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.582048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.582081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 
00:27:47.362 [2024-12-09 17:38:13.582272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.582309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.582495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.582529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.582635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.582668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.582851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.582886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.583004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.583039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 
00:27:47.362 [2024-12-09 17:38:13.583160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.583202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.583336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.583371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.583546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.583581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.583756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.583790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.584000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.584034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 
00:27:47.362 [2024-12-09 17:38:13.584162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.584207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.584327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.584360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.584486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.584521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.584713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.584746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.584873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.584908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 
00:27:47.362 [2024-12-09 17:38:13.585091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.585125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.585252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.585287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.585406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.585440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.585646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.585692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 00:27:47.362 [2024-12-09 17:38:13.585877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.362 [2024-12-09 17:38:13.585913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.362 qpair failed and we were unable to recover it. 
00:27:47.362 [2024-12-09 17:38:13.586040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.586075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.586203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.586239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.586415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.586449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.586639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.586672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.586791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.586824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 
00:27:47.363 [2024-12-09 17:38:13.586947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.586981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.587270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.587305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.587439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.587472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.587582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.587615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.587737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.587771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 
00:27:47.363 [2024-12-09 17:38:13.587902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.587936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.588039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.588073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.588186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.588221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.588363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.588398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.588516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.588550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 
00:27:47.363 [2024-12-09 17:38:13.588689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.588723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.588849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.588889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.588996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.589029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.589205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.589240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.589356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.589389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 
00:27:47.363 [2024-12-09 17:38:13.589582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.589616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.589729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.589762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.589878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.589911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.590051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.590084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.590231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.590281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 
00:27:47.363 [2024-12-09 17:38:13.590443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.590491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.590615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.590648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.590866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.590901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.591014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.591048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.591160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.591207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 
00:27:47.363 [2024-12-09 17:38:13.591336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.591371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.591494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.591526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.591638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.591671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.591779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.591811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.591920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.591953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 
00:27:47.363 [2024-12-09 17:38:13.592133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.592176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.592283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.592317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.592420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.592453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.592558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.592591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.363 qpair failed and we were unable to recover it. 00:27:47.363 [2024-12-09 17:38:13.592698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.363 [2024-12-09 17:38:13.592731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 
00:27:47.364 [2024-12-09 17:38:13.592905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.592937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.593106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.593138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.593253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.593287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.593404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.593444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.593555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.593587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 
00:27:47.364 [2024-12-09 17:38:13.593708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.593742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.593843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.593876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.594002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.594035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.594148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.594193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.594366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.594400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 
00:27:47.364 [2024-12-09 17:38:13.594572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.594606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.594723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.594755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.594924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.594956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.595066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.595099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.595218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.595252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 
00:27:47.364 [2024-12-09 17:38:13.595424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.595457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.595627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.595659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.595776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.595808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.595923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.595956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.596133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.596176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 
00:27:47.364 [2024-12-09 17:38:13.596295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.596328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.596501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.596533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.596717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.596750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.596867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.596898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.597003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.597036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 
00:27:47.364 [2024-12-09 17:38:13.597153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.597198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.597321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.597354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.597466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.597499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.597621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.597655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.597776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.597809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 
00:27:47.364 [2024-12-09 17:38:13.597989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.598029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.598273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.598308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.598428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.598461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.598639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.598672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.598794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.598826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 
00:27:47.364 [2024-12-09 17:38:13.598951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.598985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.599157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.599203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.599327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.599359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.599551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.364 [2024-12-09 17:38:13.599584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.364 qpair failed and we were unable to recover it. 00:27:47.364 [2024-12-09 17:38:13.599704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.365 [2024-12-09 17:38:13.599736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.365 qpair failed and we were unable to recover it. 
00:27:47.365 [2024-12-09 17:38:13.599926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.365 [2024-12-09 17:38:13.599958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.365 qpair failed and we were unable to recover it. 00:27:47.365 [2024-12-09 17:38:13.600133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.365 [2024-12-09 17:38:13.600175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.365 qpair failed and we were unable to recover it. 00:27:47.365 [2024-12-09 17:38:13.600304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.365 [2024-12-09 17:38:13.600337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.365 qpair failed and we were unable to recover it. 00:27:47.365 [2024-12-09 17:38:13.600509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.365 [2024-12-09 17:38:13.600542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.365 qpair failed and we were unable to recover it. 00:27:47.365 [2024-12-09 17:38:13.600651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.365 [2024-12-09 17:38:13.600684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.365 qpair failed and we were unable to recover it. 
00:27:47.365 [2024-12-09 17:38:13.600801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.365 [2024-12-09 17:38:13.600833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.365 qpair failed and we were unable to recover it.
[... the connect()-failed / qpair-failed pair above repeats continuously from 17:38:13.600801 through 17:38:13.622940, first for tqpair=0x1f261a0, then for tqpair=0x7f30f0000b90, 0x7f30f8000b90, and 0x7f30ec000b90 — every entry with errno = 111 against addr=10.0.0.2, port=4420, and every qpair reported as failed and unrecoverable ...]
00:27:47.368 [2024-12-09 17:38:13.623043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.623076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.623354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.623389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.623571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.623604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.623808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.623841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.623963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.623997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 
00:27:47.368 [2024-12-09 17:38:13.624195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.624230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.624421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.624453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.624580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.624612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.624792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.624825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.624947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.624979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 
00:27:47.368 [2024-12-09 17:38:13.625091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.625123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.625312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.625346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.625524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.625556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.625736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.625769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.625975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.626009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 
00:27:47.368 [2024-12-09 17:38:13.626185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.626219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.626346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.626379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.626492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.626524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.626653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.626686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.626804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.626837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 
00:27:47.368 [2024-12-09 17:38:13.626943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.626974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.627145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.627189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.627312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.627345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.627514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.627546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.627659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.627691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 
00:27:47.368 [2024-12-09 17:38:13.627891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.627923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.628098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.628130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.628318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.628353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.628486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-12-09 17:38:13.628519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-12-09 17:38:13.628707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.628739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 
00:27:47.369 [2024-12-09 17:38:13.629004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.629036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.629216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.629257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.629366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.629399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.629540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.629573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.629756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.629789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 
00:27:47.369 [2024-12-09 17:38:13.629923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.629955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.630263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.630299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.630401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.630434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.630548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.630580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.630800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.630833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 
00:27:47.369 [2024-12-09 17:38:13.631032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.631065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.631179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.631213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.631326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.631358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.631544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.631577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.631748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.631780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 
00:27:47.369 [2024-12-09 17:38:13.632024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.632057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.632273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.632308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.632481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.632513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.632630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.632663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.632834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.632867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 
00:27:47.369 [2024-12-09 17:38:13.632992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.633024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.633238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.633273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.633381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.633413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.633536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.633568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.633670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.633703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 
00:27:47.369 [2024-12-09 17:38:13.633814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.633848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.634062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.634095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.634290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.634325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.634538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.634572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.634786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.634820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 
00:27:47.369 [2024-12-09 17:38:13.635000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.635034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.635251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.635286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.635469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.635502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.635616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.635649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.635885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.635918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 
00:27:47.369 [2024-12-09 17:38:13.636123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.636157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.636325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.636358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.636489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.636524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-12-09 17:38:13.636714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-12-09 17:38:13.636746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.370 qpair failed and we were unable to recover it. 00:27:47.370 [2024-12-09 17:38:13.636856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.370 [2024-12-09 17:38:13.636889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.370 qpair failed and we were unable to recover it. 
00:27:47.370 [2024-12-09 17:38:13.637013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.370 [2024-12-09 17:38:13.637046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.370 qpair failed and we were unable to recover it. 00:27:47.370 [2024-12-09 17:38:13.637311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.370 [2024-12-09 17:38:13.637352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.370 qpair failed and we were unable to recover it. 00:27:47.370 [2024-12-09 17:38:13.637466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.370 [2024-12-09 17:38:13.637499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.370 qpair failed and we were unable to recover it. 00:27:47.370 [2024-12-09 17:38:13.637626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.370 [2024-12-09 17:38:13.637659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.370 qpair failed and we were unable to recover it. 00:27:47.370 [2024-12-09 17:38:13.637877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.370 [2024-12-09 17:38:13.637911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.370 qpair failed and we were unable to recover it. 
00:27:47.370 [2024-12-09 17:38:13.638083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.370 [2024-12-09 17:38:13.638116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.370 qpair failed and we were unable to recover it. 00:27:47.370 [2024-12-09 17:38:13.638295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.370 [2024-12-09 17:38:13.638330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.370 qpair failed and we were unable to recover it. 00:27:47.370 [2024-12-09 17:38:13.638478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.370 [2024-12-09 17:38:13.638511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.370 qpair failed and we were unable to recover it. 00:27:47.370 [2024-12-09 17:38:13.638749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.370 [2024-12-09 17:38:13.638781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.370 qpair failed and we were unable to recover it. 00:27:47.370 [2024-12-09 17:38:13.638888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.370 [2024-12-09 17:38:13.638920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.370 qpair failed and we were unable to recover it. 
00:27:47.372 [2024-12-09 17:38:13.652295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.372 [2024-12-09 17:38:13.652343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.372 qpair failed and we were unable to recover it.
00:27:47.373 [2024-12-09 17:38:13.658541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.658574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.658701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.658740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.658921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.658954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.659129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.659162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.659314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.659348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 
00:27:47.373 [2024-12-09 17:38:13.659469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.659502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.659607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.659640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.659747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.659780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.659904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.659937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.660145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.660184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 
00:27:47.373 [2024-12-09 17:38:13.660355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.660388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.660508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.660540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.660655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.660689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.660810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.660843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.661062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.661095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 
00:27:47.373 [2024-12-09 17:38:13.661242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.661278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.661455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.661488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.661663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.661696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.661820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.661853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.373 [2024-12-09 17:38:13.661964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.661997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 
00:27:47.373 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:47.373 [2024-12-09 17:38:13.662107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.373 [2024-12-09 17:38:13.662144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.373 qpair failed and we were unable to recover it.
00:27:47.373 [2024-12-09 17:38:13.662275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.373 [2024-12-09 17:38:13.662308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.373 qpair failed and we were unable to recover it.
00:27:47.373 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:47.373 [2024-12-09 17:38:13.662524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.373 [2024-12-09 17:38:13.662557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.373 qpair failed and we were unable to recover it.
00:27:47.373 [2024-12-09 17:38:13.662680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.373 [2024-12-09 17:38:13.662713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.373 qpair failed and we were unable to recover it.
00:27:47.373 [2024-12-09 17:38:13.662818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.373 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
[2024-12-09 17:38:13.662852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.373 qpair failed and we were unable to recover it.
00:27:47.373 [2024-12-09 17:38:13.662962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.373 [2024-12-09 17:38:13.662995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.373 qpair failed and we were unable to recover it.
00:27:47.373 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
[2024-12-09 17:38:13.663219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.373 [2024-12-09 17:38:13.663262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.373 qpair failed and we were unable to recover it.
00:27:47.373 [2024-12-09 17:38:13.663386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.373 [2024-12-09 17:38:13.663418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.373 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.373 qpair failed and we were unable to recover it.
00:27:47.373 [2024-12-09 17:38:13.663634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.373 [2024-12-09 17:38:13.663666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.373 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.663883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.663916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.664027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.664060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.664175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.664209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.664386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.664419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 
00:27:47.374 [2024-12-09 17:38:13.664526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.664560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.664750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.664783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.664903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.664937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.665058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.665091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.665293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.665328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 
00:27:47.374 [2024-12-09 17:38:13.665505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.665538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.665645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.665684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.665799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.665832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.665953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.665986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.666096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.666129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 
00:27:47.374 [2024-12-09 17:38:13.666273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.666310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.666425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.666457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.666634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.666667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.666772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.666804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.666925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.666959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 
00:27:47.374 [2024-12-09 17:38:13.667075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.667108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.667239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.667275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.667503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.667538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.667708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.667742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.667867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.667900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 
00:27:47.374 [2024-12-09 17:38:13.668088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.668121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.668305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.668339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.668460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.668493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.668594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.668626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.668737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.668770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 
00:27:47.374 [2024-12-09 17:38:13.668882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.668915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.669045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.669078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.669212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.669247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.669376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.669410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.669643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.669676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 
00:27:47.374 [2024-12-09 17:38:13.669789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.669832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.669959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.669996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.670250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.670284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.670407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.670440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-12-09 17:38:13.670553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-12-09 17:38:13.670587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 
00:27:47.374 [2024-12-09 17:38:13.670695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-12-09 17:38:13.670729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-12-09 17:38:13.670842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-12-09 17:38:13.670874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-12-09 17:38:13.671050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-12-09 17:38:13.671083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-12-09 17:38:13.671217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-12-09 17:38:13.671252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-12-09 17:38:13.671374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-12-09 17:38:13.671407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 
00:27:47.375 [2024-12-09 17:38:13.671514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-12-09 17:38:13.671546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-12-09 17:38:13.671659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-12-09 17:38:13.671692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-12-09 17:38:13.671807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-12-09 17:38:13.671839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-12-09 17:38:13.672023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-12-09 17:38:13.672056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-12-09 17:38:13.672177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-12-09 17:38:13.672211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 
00:27:47.376 [2024-12-09 17:38:13.678950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.376 [2024-12-09 17:38:13.679002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.376 qpair failed and we were unable to recover it.
00:27:47.377 [2024-12-09 17:38:13.688070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.377 [2024-12-09 17:38:13.688118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.377 qpair failed and we were unable to recover it.
00:27:47.378 [2024-12-09 17:38:13.690483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.690515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.690620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.690653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.690771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.690804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.690933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.690964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.691075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.691107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 
00:27:47.378 [2024-12-09 17:38:13.691284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.691318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.691441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.691474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.691651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.691683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.691803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.691835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.691946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.691978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 
00:27:47.378 [2024-12-09 17:38:13.692099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.692131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.692258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.692292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.692395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.692434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.692568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.692601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.692739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.692771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 
00:27:47.378 [2024-12-09 17:38:13.692897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.692929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.693051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.693084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.693197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.693230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.693340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.693372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.693498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.693529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 
00:27:47.378 [2024-12-09 17:38:13.693641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.693672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.693796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.693829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.693936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.693970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.694092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.694125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.694261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.694296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 
00:27:47.378 [2024-12-09 17:38:13.694421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.694453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.694577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.694609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.694719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.694751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.694861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.694895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.695004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.695036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 
00:27:47.378 [2024-12-09 17:38:13.695153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.695195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.695318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.695350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.695460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-12-09 17:38:13.695495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-12-09 17:38:13.695601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.695633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.695751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.695783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 
00:27:47.379 [2024-12-09 17:38:13.695889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.695923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.696031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.696063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.696178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.696211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.696314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.696345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.696460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.696499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 
00:27:47.379 [2024-12-09 17:38:13.696618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.696650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.696766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.696799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.696906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.696938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.697073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.697104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.697232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.697265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 
00:27:47.379 [2024-12-09 17:38:13.697377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.697409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.697518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.697550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.697660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.697691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.697795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.697827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.697929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.697960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 
00:27:47.379 [2024-12-09 17:38:13.698062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.698092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.698272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.698307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.698432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.698463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.698577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.698609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.698720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.698752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 
00:27:47.379 [2024-12-09 17:38:13.698859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.698891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.698997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.699028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.699138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.699177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.379 [2024-12-09 17:38:13.699282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.699312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.699418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.699446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 
00:27:47.379 [2024-12-09 17:38:13.699557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.699587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:47.379 [2024-12-09 17:38:13.699692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.699721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.699822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.699852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.699959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.699989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 
00:27:47.379 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.379 [2024-12-09 17:38:13.700095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.700124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.379 [2024-12-09 17:38:13.700335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.700367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.379 [2024-12-09 17:38:13.700469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.379 [2024-12-09 17:38:13.700499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.379 qpair failed and we were unable to recover it. 00:27:47.380 [2024-12-09 17:38:13.700675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.380 [2024-12-09 17:38:13.700705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.380 qpair failed and we were unable to recover it. 00:27:47.380 [2024-12-09 17:38:13.700823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.380 [2024-12-09 17:38:13.700854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.380 qpair failed and we were unable to recover it. 
00:27:47.380 [2024-12-09 17:38:13.700956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.380 [2024-12-09 17:38:13.700983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.380 qpair failed and we were unable to recover it. 00:27:47.380 [2024-12-09 17:38:13.701093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.380 [2024-12-09 17:38:13.701124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.380 qpair failed and we were unable to recover it. 00:27:47.380 [2024-12-09 17:38:13.701244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.380 [2024-12-09 17:38:13.701274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.380 qpair failed and we were unable to recover it. 00:27:47.380 [2024-12-09 17:38:13.701379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.380 [2024-12-09 17:38:13.701410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.380 qpair failed and we were unable to recover it. 00:27:47.380 [2024-12-09 17:38:13.701640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.380 [2024-12-09 17:38:13.701670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.380 qpair failed and we were unable to recover it. 
00:27:47.380 [2024-12-09 17:38:13.701772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.380 [2024-12-09 17:38:13.701800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.380 qpair failed and we were unable to recover it.
00:27:47.380 [... the same connect()-failed / sock-connection-error / qpair-failed triplet repeats continuously for tqpair=0x1f261a0 and tqpair=0x7f30f0000b90, every attempt failing with errno = 111 against addr=10.0.0.2, port=4420, from 17:38:13.701917 through 17:38:13.720040 ...]
00:27:47.383 [2024-12-09 17:38:13.720162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.383 [2024-12-09 17:38:13.720205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.383 qpair failed and we were unable to recover it.
00:27:47.383 [2024-12-09 17:38:13.720341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.720374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.720552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.720583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.720690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.720722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.720894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.720927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.721105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.721137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 
00:27:47.383 [2024-12-09 17:38:13.721272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.721312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30ec000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.721451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.721488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.721605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.721637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.721764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.721797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.721977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.722010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 
00:27:47.383 [2024-12-09 17:38:13.722253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.722293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.722398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.722431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.722671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.722704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.722810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.722842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.722950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.722983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 
00:27:47.383 [2024-12-09 17:38:13.723157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.723213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.723331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.723362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.723475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.723508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.723629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.723662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.723767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.723800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 
00:27:47.383 [2024-12-09 17:38:13.723910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.723942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.724068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.724100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.724304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.724339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.724446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.724477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.724618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.724650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 
00:27:47.383 [2024-12-09 17:38:13.724833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.724865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.724994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.725028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.725145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.725188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.725307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.725339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.725527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.725559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 
00:27:47.383 [2024-12-09 17:38:13.725738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.725771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.725883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.725916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.726102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.726134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.726322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.726356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-12-09 17:38:13.726540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.726574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 
00:27:47.383 [2024-12-09 17:38:13.726752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-12-09 17:38:13.726786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.726964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.726997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.727103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.727141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.727328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.727361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.727496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.727530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 
00:27:47.384 [2024-12-09 17:38:13.727702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.727735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.727853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.727886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.728063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.728097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.728233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.728268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.728447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.728480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 
00:27:47.384 [2024-12-09 17:38:13.728587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.728619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.728757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.728791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.728900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.728932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.729042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.729075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.729249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.729284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 
00:27:47.384 [2024-12-09 17:38:13.729455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.729487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.729606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.729638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.729832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.729865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.730039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.730073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.730197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.730232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 
00:27:47.384 [2024-12-09 17:38:13.730338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.730370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.730507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.730540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.730648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.730681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.730864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.730900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.731025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.731058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 
00:27:47.384 [2024-12-09 17:38:13.731192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.731225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.731328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.731363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.731487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.731520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.731690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.731721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.731912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.731952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 
00:27:47.384 [2024-12-09 17:38:13.732061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.732093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.732205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.732239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.732357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.732391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.732534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.732567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-12-09 17:38:13.732743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.732774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 
00:27:47.384 [2024-12-09 17:38:13.732900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-12-09 17:38:13.732933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.385 [2024-12-09 17:38:13.733111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.733145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-12-09 17:38:13.733257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.733291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-12-09 17:38:13.733471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.733503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-12-09 17:38:13.733740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.733772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 
00:27:47.385 [2024-12-09 17:38:13.733878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.733912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-12-09 17:38:13.734038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.734069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-12-09 17:38:13.734242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.734276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-12-09 17:38:13.734408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.734440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-12-09 17:38:13.734545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.734578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 
00:27:47.385 Malloc0 00:27:47.385 [2024-12-09 17:38:13.734778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.734811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-12-09 17:38:13.734936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.734969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-12-09 17:38:13.735146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.735186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-12-09 17:38:13.735362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.735394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.385 [2024-12-09 17:38:13.735505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-12-09 17:38:13.735538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 
00:27:47.385 [2024-12-09 17:38:13.735729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.735762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:47.385 [2024-12-09 17:38:13.735948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.735980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.736092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.736125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.385 [2024-12-09 17:38:13.736388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.736424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.385 [2024-12-09 17:38:13.736542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.736574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.736689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.736722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.736822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.736855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.736970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.737002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.737124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.737157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.737368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.737402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.737532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.737564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.737682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.737714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.737838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.737871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.738064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.738097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.738281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.738316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.738503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.738535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.738733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.738766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.738934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.738966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.739152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.739200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.739311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.739343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.739466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.739498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.739613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.739646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.739819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.739852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.385 [2024-12-09 17:38:13.739956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.385 [2024-12-09 17:38:13.739988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.385 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.740100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.740132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.740250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.740284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.740409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.740442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.740551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.740583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.740699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.740732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.740914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.740948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.741139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.741200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.741316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.741349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.741465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.741497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.741627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.741660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.741839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.741871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.742001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.742035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.742141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.742184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.742297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.742330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.742456] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:47.386 [2024-12-09 17:38:13.742467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.742498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.742614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.742647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.742772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.742804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.742974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.743007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.743115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.743148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.743394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.743426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.743541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.743572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f261a0 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.743774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.743828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.743964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.743998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.744117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.744151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.744292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.744326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.744519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.744552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.744663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.744697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.744920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.744954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.745138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.745184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.745310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.745344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.745451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.745484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.745595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.745628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.745825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.745859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.746028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.746061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.746246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.746290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.746415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.746448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.746621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.746656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.746765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.746799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.386 [2024-12-09 17:38:13.746986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.386 [2024-12-09 17:38:13.747019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.386 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.747145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.747187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.747316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.747350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.747467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.747500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.747674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.747706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.747824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.747858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.748031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.748064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.748173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.748207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.748315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.748348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.748514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.748547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.748674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.748708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.748830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.748863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.749063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.749096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.749290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.749324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.749495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.749529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.749641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.749674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.749847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.749879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.750002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.750034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.750221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.750256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.750374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.750406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.750578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.750609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.750723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.750757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.750938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.750972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.751103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.387 [2024-12-09 17:38:13.751146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.751407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.751441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:47.387 [2024-12-09 17:38:13.751568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.751601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.751717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.751749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.387 [2024-12-09 17:38:13.751941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.751974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.752178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.387 [2024-12-09 17:38:13.752212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.752334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.752367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.752555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.752587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.752715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.752747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.752885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.752917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.753029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.753060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.753182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.753216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.753350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.753384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.753566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.753598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.753779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-12-09 17:38:13.753811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
00:27:47.387 [2024-12-09 17:38:13.753941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.753974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.754099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.754130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.754261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.754295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.754411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.754443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.754564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.754595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.754805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.754837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.755038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.755070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.755258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.755292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.755417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.755450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.755636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.755669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.755912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.755945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.756059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.756092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.756268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.756303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.756485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.756517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.756645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.756678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.756854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.756888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.757091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.757122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.757241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.757275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.757458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.757490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.757592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.757626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.757754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.757786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.757898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.757930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.758181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.758217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.758327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.758366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.758484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.758516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.758618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.758650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.758866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.758899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.759014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.759045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.388 [2024-12-09 17:38:13.759235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.759269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.759443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.759476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:47.388 [2024-12-09 17:38:13.759712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.759745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 [2024-12-09 17:38:13.759861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.759893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.388 [2024-12-09 17:38:13.760088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.388 [2024-12-09 17:38:13.760121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.388 qpair failed and we were unable to recover it.
00:27:47.388 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.388 [2024-12-09 17:38:13.760245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.760278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.760384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.760418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.760600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.760633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.760735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.760768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.760888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.760919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.761087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.761119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.761235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.761268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.761375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.761406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.761589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.761621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.761794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.761826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.762002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.762034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.762221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.762255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.762385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.762417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.762535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.762566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.762702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.762733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f0000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.762849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.762886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.763083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.763116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.763324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.763360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.763550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.763583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.763693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.763725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.763834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.763867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.763977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.764009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.764113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.764147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.764336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.764369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.764471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.764503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.764604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.764637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.764808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.764839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.764954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.764987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.765118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.765157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.765301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.765334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.765522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.765555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.765674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.765707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.765825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.765857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.765980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.766012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.766254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.766290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.766409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.766443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.766613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.766646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.766824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.766856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 [2024-12-09 17:38:13.766972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-12-09 17:38:13.767004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
00:27:47.389 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.390 [2024-12-09 17:38:13.767188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.390 [2024-12-09 17:38:13.767227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.390 qpair failed and we were unable to recover it.
00:27:47.390 [2024-12-09 17:38:13.767407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.390 [2024-12-09 17:38:13.767441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.390 qpair failed and we were unable to recover it.
00:27:47.390 [2024-12-09 17:38:13.767553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.390 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:47.390 [2024-12-09 17:38:13.767590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.390 qpair failed and we were unable to recover it.
00:27:47.390 [2024-12-09 17:38:13.767794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.390 [2024-12-09 17:38:13.767828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.390 qpair failed and we were unable to recover it.
00:27:47.390 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.390 [2024-12-09 17:38:13.768002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.390 [2024-12-09 17:38:13.768036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.390 qpair failed and we were unable to recover it.
00:27:47.390 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.390 [2024-12-09 17:38:13.768207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.390 [2024-12-09 17:38:13.768241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.390 qpair failed and we were unable to recover it.
00:27:47.390 [2024-12-09 17:38:13.768380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.390 [2024-12-09 17:38:13.768412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.390 qpair failed and we were unable to recover it.
00:27:47.390 [2024-12-09 17:38:13.768513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.390 [2024-12-09 17:38:13.768545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.390 qpair failed and we were unable to recover it.
00:27:47.390 [2024-12-09 17:38:13.768723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.390 [2024-12-09 17:38:13.768755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.390 qpair failed and we were unable to recover it.
00:27:47.390 [2024-12-09 17:38:13.768930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.390 [2024-12-09 17:38:13.768963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.390 qpair failed and we were unable to recover it.
00:27:47.390 [2024-12-09 17:38:13.769076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.390 [2024-12-09 17:38:13.769108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:27:47.390 qpair failed and we were unable to recover it.
00:27:47.390 [2024-12-09 17:38:13.769235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.390 [2024-12-09 17:38:13.769269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.390 qpair failed and we were unable to recover it. 00:27:47.390 [2024-12-09 17:38:13.769376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.390 [2024-12-09 17:38:13.769408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.390 qpair failed and we were unable to recover it. 00:27:47.390 [2024-12-09 17:38:13.769529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.390 [2024-12-09 17:38:13.769562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.390 qpair failed and we were unable to recover it. 00:27:47.390 [2024-12-09 17:38:13.769683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.390 [2024-12-09 17:38:13.769721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.390 qpair failed and we were unable to recover it. 00:27:47.390 [2024-12-09 17:38:13.769835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.390 [2024-12-09 17:38:13.769867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.390 qpair failed and we were unable to recover it. 
00:27:47.390 [2024-12-09 17:38:13.769976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.390 [2024-12-09 17:38:13.770009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.390 qpair failed and we were unable to recover it. 00:27:47.390 [2024-12-09 17:38:13.770130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.390 [2024-12-09 17:38:13.770162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.390 qpair failed and we were unable to recover it. 00:27:47.390 [2024-12-09 17:38:13.770308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.390 [2024-12-09 17:38:13.770342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.390 qpair failed and we were unable to recover it. 00:27:47.390 [2024-12-09 17:38:13.770443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.390 [2024-12-09 17:38:13.770476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:27:47.390 qpair failed and we were unable to recover it. 
00:27:47.390 [2024-12-09 17:38:13.770677] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.390 [2024-12-09 17:38:13.773208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.390 [2024-12-09 17:38:13.773332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.390 [2024-12-09 17:38:13.773379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.390 [2024-12-09 17:38:13.773403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.390 [2024-12-09 17:38:13.773424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.390 [2024-12-09 17:38:13.773494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.390 qpair failed and we were unable to recover it. 
00:27:47.390 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.390 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:47.390 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.390 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.390 [2024-12-09 17:38:13.782990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.390 [2024-12-09 17:38:13.783096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.390 [2024-12-09 17:38:13.783123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.390 [2024-12-09 17:38:13.783139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.390 [2024-12-09 17:38:13.783153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.390 [2024-12-09 17:38:13.783201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.390 qpair failed and we were unable to recover it. 
00:27:47.390 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.390 17:38:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2061193 00:27:47.390 [2024-12-09 17:38:13.793007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.390 [2024-12-09 17:38:13.793070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.390 [2024-12-09 17:38:13.793090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.390 [2024-12-09 17:38:13.793100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.390 [2024-12-09 17:38:13.793109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.390 [2024-12-09 17:38:13.793132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.390 qpair failed and we were unable to recover it. 
00:27:47.390 [2024-12-09 17:38:13.803021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.390 [2024-12-09 17:38:13.803083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.390 [2024-12-09 17:38:13.803097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.390 [2024-12-09 17:38:13.803104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.390 [2024-12-09 17:38:13.803111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.390 [2024-12-09 17:38:13.803128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.390 qpair failed and we were unable to recover it. 
00:27:47.390 [2024-12-09 17:38:13.813018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.390 [2024-12-09 17:38:13.813106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.390 [2024-12-09 17:38:13.813120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.390 [2024-12-09 17:38:13.813127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.390 [2024-12-09 17:38:13.813134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.390 [2024-12-09 17:38:13.813149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.390 qpair failed and we were unable to recover it. 
00:27:47.390 [2024-12-09 17:38:13.823018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.391 [2024-12-09 17:38:13.823108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.391 [2024-12-09 17:38:13.823122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.391 [2024-12-09 17:38:13.823129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.391 [2024-12-09 17:38:13.823136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.391 [2024-12-09 17:38:13.823154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.391 qpair failed and we were unable to recover it. 
00:27:47.391 [2024-12-09 17:38:13.833005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.391 [2024-12-09 17:38:13.833091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.391 [2024-12-09 17:38:13.833105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.391 [2024-12-09 17:38:13.833112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.391 [2024-12-09 17:38:13.833118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.391 [2024-12-09 17:38:13.833133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.391 qpair failed and we were unable to recover it. 
00:27:47.391 [2024-12-09 17:38:13.843113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.391 [2024-12-09 17:38:13.843192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.391 [2024-12-09 17:38:13.843207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.391 [2024-12-09 17:38:13.843214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.391 [2024-12-09 17:38:13.843221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.391 [2024-12-09 17:38:13.843236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.391 qpair failed and we were unable to recover it. 
00:27:47.391 [2024-12-09 17:38:13.853140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.391 [2024-12-09 17:38:13.853203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.391 [2024-12-09 17:38:13.853216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.391 [2024-12-09 17:38:13.853223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.391 [2024-12-09 17:38:13.853229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.391 [2024-12-09 17:38:13.853245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.391 qpair failed and we were unable to recover it. 
00:27:47.391 [2024-12-09 17:38:13.863139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.391 [2024-12-09 17:38:13.863226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.391 [2024-12-09 17:38:13.863239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.391 [2024-12-09 17:38:13.863246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.391 [2024-12-09 17:38:13.863252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.391 [2024-12-09 17:38:13.863268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.391 qpair failed and we were unable to recover it. 
00:27:47.391 [2024-12-09 17:38:13.873111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.391 [2024-12-09 17:38:13.873170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.391 [2024-12-09 17:38:13.873184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.391 [2024-12-09 17:38:13.873191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.391 [2024-12-09 17:38:13.873197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.391 [2024-12-09 17:38:13.873212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.391 qpair failed and we were unable to recover it. 
00:27:47.391 [2024-12-09 17:38:13.883233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.391 [2024-12-09 17:38:13.883290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.391 [2024-12-09 17:38:13.883303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.391 [2024-12-09 17:38:13.883310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.391 [2024-12-09 17:38:13.883316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.391 [2024-12-09 17:38:13.883332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.391 qpair failed and we were unable to recover it. 
00:27:47.651 [2024-12-09 17:38:13.893243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.651 [2024-12-09 17:38:13.893299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.651 [2024-12-09 17:38:13.893312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.651 [2024-12-09 17:38:13.893319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.651 [2024-12-09 17:38:13.893326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.651 [2024-12-09 17:38:13.893341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.651 qpair failed and we were unable to recover it. 
00:27:47.651 [2024-12-09 17:38:13.903270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.651 [2024-12-09 17:38:13.903326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.651 [2024-12-09 17:38:13.903339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.651 [2024-12-09 17:38:13.903347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.651 [2024-12-09 17:38:13.903353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.651 [2024-12-09 17:38:13.903368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.651 qpair failed and we were unable to recover it. 
00:27:47.651 [2024-12-09 17:38:13.913317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.651 [2024-12-09 17:38:13.913382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.651 [2024-12-09 17:38:13.913401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.651 [2024-12-09 17:38:13.913408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.651 [2024-12-09 17:38:13.913415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.651 [2024-12-09 17:38:13.913430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.651 qpair failed and we were unable to recover it. 
00:27:47.651 [2024-12-09 17:38:13.923378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.651 [2024-12-09 17:38:13.923463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.651 [2024-12-09 17:38:13.923478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.651 [2024-12-09 17:38:13.923485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.651 [2024-12-09 17:38:13.923491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.651 [2024-12-09 17:38:13.923507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.651 qpair failed and we were unable to recover it. 
00:27:47.651 [2024-12-09 17:38:13.933347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.651 [2024-12-09 17:38:13.933405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.651 [2024-12-09 17:38:13.933418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.651 [2024-12-09 17:38:13.933425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.651 [2024-12-09 17:38:13.933432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.651 [2024-12-09 17:38:13.933447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.651 qpair failed and we were unable to recover it. 
00:27:47.651 [2024-12-09 17:38:13.943402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.651 [2024-12-09 17:38:13.943464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.651 [2024-12-09 17:38:13.943477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.651 [2024-12-09 17:38:13.943484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.651 [2024-12-09 17:38:13.943491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.651 [2024-12-09 17:38:13.943506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.651 qpair failed and we were unable to recover it. 
00:27:47.651 [2024-12-09 17:38:13.953401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.651 [2024-12-09 17:38:13.953454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.651 [2024-12-09 17:38:13.953467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.651 [2024-12-09 17:38:13.953474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.651 [2024-12-09 17:38:13.953484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.651 [2024-12-09 17:38:13.953499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.651 qpair failed and we were unable to recover it. 
00:27:47.651 [2024-12-09 17:38:13.963431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.651 [2024-12-09 17:38:13.963491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.651 [2024-12-09 17:38:13.963504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.651 [2024-12-09 17:38:13.963511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.651 [2024-12-09 17:38:13.963517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.651 [2024-12-09 17:38:13.963532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.651 qpair failed and we were unable to recover it. 
00:27:47.651 [2024-12-09 17:38:13.973455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.651 [2024-12-09 17:38:13.973536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.651 [2024-12-09 17:38:13.973549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.651 [2024-12-09 17:38:13.973555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.651 [2024-12-09 17:38:13.973561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.651 [2024-12-09 17:38:13.973576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.651 qpair failed and we were unable to recover it. 
00:27:47.651 [2024-12-09 17:38:13.983475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.651 [2024-12-09 17:38:13.983531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.651 [2024-12-09 17:38:13.983544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.651 [2024-12-09 17:38:13.983550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.651 [2024-12-09 17:38:13.983556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.651 [2024-12-09 17:38:13.983571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.651 qpair failed and we were unable to recover it. 
00:27:47.651 [2024-12-09 17:38:13.993508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.651 [2024-12-09 17:38:13.993559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.651 [2024-12-09 17:38:13.993573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.651 [2024-12-09 17:38:13.993579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.651 [2024-12-09 17:38:13.993585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.651 [2024-12-09 17:38:13.993601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.651 qpair failed and we were unable to recover it. 
00:27:47.651 [2024-12-09 17:38:14.003564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.651 [2024-12-09 17:38:14.003622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.651 [2024-12-09 17:38:14.003636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.651 [2024-12-09 17:38:14.003642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.651 [2024-12-09 17:38:14.003648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.651 [2024-12-09 17:38:14.003663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.651 qpair failed and we were unable to recover it. 
00:27:47.651 [2024-12-09 17:38:14.013558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.652 [2024-12-09 17:38:14.013657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.652 [2024-12-09 17:38:14.013671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.652 [2024-12-09 17:38:14.013677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.652 [2024-12-09 17:38:14.013683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.652 [2024-12-09 17:38:14.013698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.652 qpair failed and we were unable to recover it. 
00:27:47.652 [2024-12-09 17:38:14.023616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.652 [2024-12-09 17:38:14.023682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.652 [2024-12-09 17:38:14.023696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.652 [2024-12-09 17:38:14.023703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.652 [2024-12-09 17:38:14.023709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.652 [2024-12-09 17:38:14.023724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.652 qpair failed and we were unable to recover it. 
[The identical CONNECT failure sequence (Unknown controller ID 0x1 -> Connect command failed, rc -5 -> sct 1, sc 130 -> CQ transport error -6 on qpair id 1) repeats 34 more times, roughly every 10 ms, from 17:38:14.033 through 17:38:14.364, each attempt ending with "qpair failed and we were unable to recover it."]
00:27:47.913 [2024-12-09 17:38:14.374654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.914 [2024-12-09 17:38:14.374723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.914 [2024-12-09 17:38:14.374736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.914 [2024-12-09 17:38:14.374743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.914 [2024-12-09 17:38:14.374749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.914 [2024-12-09 17:38:14.374764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.914 qpair failed and we were unable to recover it. 
00:27:47.914 [2024-12-09 17:38:14.384639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.914 [2024-12-09 17:38:14.384690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.914 [2024-12-09 17:38:14.384703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.914 [2024-12-09 17:38:14.384709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.914 [2024-12-09 17:38:14.384716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.914 [2024-12-09 17:38:14.384734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.914 qpair failed and we were unable to recover it. 
00:27:47.914 [2024-12-09 17:38:14.394668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.914 [2024-12-09 17:38:14.394721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.914 [2024-12-09 17:38:14.394735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.914 [2024-12-09 17:38:14.394742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.914 [2024-12-09 17:38:14.394748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.914 [2024-12-09 17:38:14.394763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.914 qpair failed and we were unable to recover it. 
00:27:47.914 [2024-12-09 17:38:14.404653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.914 [2024-12-09 17:38:14.404709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.914 [2024-12-09 17:38:14.404722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.914 [2024-12-09 17:38:14.404729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.914 [2024-12-09 17:38:14.404735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.914 [2024-12-09 17:38:14.404750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.914 qpair failed and we were unable to recover it. 
00:27:47.914 [2024-12-09 17:38:14.414730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.914 [2024-12-09 17:38:14.414785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.914 [2024-12-09 17:38:14.414798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.914 [2024-12-09 17:38:14.414805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.914 [2024-12-09 17:38:14.414811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.914 [2024-12-09 17:38:14.414827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.914 qpair failed and we were unable to recover it. 
00:27:47.914 [2024-12-09 17:38:14.424744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.914 [2024-12-09 17:38:14.424819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.914 [2024-12-09 17:38:14.424833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.914 [2024-12-09 17:38:14.424840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.914 [2024-12-09 17:38:14.424846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.914 [2024-12-09 17:38:14.424861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.914 qpair failed and we were unable to recover it. 
00:27:47.914 [2024-12-09 17:38:14.434771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.914 [2024-12-09 17:38:14.434825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.914 [2024-12-09 17:38:14.434838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.914 [2024-12-09 17:38:14.434845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.914 [2024-12-09 17:38:14.434852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.914 [2024-12-09 17:38:14.434867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.914 qpair failed and we were unable to recover it. 
00:27:47.914 [2024-12-09 17:38:14.444810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.914 [2024-12-09 17:38:14.444864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.914 [2024-12-09 17:38:14.444878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.914 [2024-12-09 17:38:14.444885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.914 [2024-12-09 17:38:14.444891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:47.914 [2024-12-09 17:38:14.444906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.914 qpair failed and we were unable to recover it. 
00:27:48.173 [2024-12-09 17:38:14.454853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.173 [2024-12-09 17:38:14.454910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.173 [2024-12-09 17:38:14.454924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.173 [2024-12-09 17:38:14.454931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.173 [2024-12-09 17:38:14.454937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.173 [2024-12-09 17:38:14.454952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.173 qpair failed and we were unable to recover it. 
00:27:48.173 [2024-12-09 17:38:14.464883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.173 [2024-12-09 17:38:14.464935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.173 [2024-12-09 17:38:14.464948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.173 [2024-12-09 17:38:14.464954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.173 [2024-12-09 17:38:14.464961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.173 [2024-12-09 17:38:14.464976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.173 qpair failed and we were unable to recover it. 
00:27:48.173 [2024-12-09 17:38:14.474898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.173 [2024-12-09 17:38:14.474950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.173 [2024-12-09 17:38:14.474966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.173 [2024-12-09 17:38:14.474973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.173 [2024-12-09 17:38:14.474979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.173 [2024-12-09 17:38:14.474994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.173 qpair failed and we were unable to recover it. 
00:27:48.173 [2024-12-09 17:38:14.484864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.173 [2024-12-09 17:38:14.484922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.173 [2024-12-09 17:38:14.484935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.173 [2024-12-09 17:38:14.484942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.173 [2024-12-09 17:38:14.484948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.173 [2024-12-09 17:38:14.484964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.173 qpair failed and we were unable to recover it. 
00:27:48.173 [2024-12-09 17:38:14.494944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.173 [2024-12-09 17:38:14.495026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.173 [2024-12-09 17:38:14.495040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.173 [2024-12-09 17:38:14.495047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.173 [2024-12-09 17:38:14.495053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.174 [2024-12-09 17:38:14.495068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.174 qpair failed and we were unable to recover it. 
00:27:48.174 [2024-12-09 17:38:14.504984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.174 [2024-12-09 17:38:14.505050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.174 [2024-12-09 17:38:14.505063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.174 [2024-12-09 17:38:14.505069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.174 [2024-12-09 17:38:14.505076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.174 [2024-12-09 17:38:14.505091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.174 qpair failed and we were unable to recover it. 
00:27:48.174 [2024-12-09 17:38:14.515058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.174 [2024-12-09 17:38:14.515109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.174 [2024-12-09 17:38:14.515122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.174 [2024-12-09 17:38:14.515129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.174 [2024-12-09 17:38:14.515139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.174 [2024-12-09 17:38:14.515154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.174 qpair failed and we were unable to recover it. 
00:27:48.174 [2024-12-09 17:38:14.525052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.174 [2024-12-09 17:38:14.525107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.174 [2024-12-09 17:38:14.525121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.174 [2024-12-09 17:38:14.525128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.174 [2024-12-09 17:38:14.525135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.174 [2024-12-09 17:38:14.525151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.174 qpair failed and we were unable to recover it. 
00:27:48.174 [2024-12-09 17:38:14.535065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.174 [2024-12-09 17:38:14.535133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.174 [2024-12-09 17:38:14.535146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.174 [2024-12-09 17:38:14.535153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.174 [2024-12-09 17:38:14.535159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.174 [2024-12-09 17:38:14.535178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.174 qpair failed and we were unable to recover it. 
00:27:48.174 [2024-12-09 17:38:14.545098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.174 [2024-12-09 17:38:14.545151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.174 [2024-12-09 17:38:14.545164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.174 [2024-12-09 17:38:14.545175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.174 [2024-12-09 17:38:14.545181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.174 [2024-12-09 17:38:14.545197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.174 qpair failed and we were unable to recover it. 
00:27:48.174 [2024-12-09 17:38:14.555178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.174 [2024-12-09 17:38:14.555231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.174 [2024-12-09 17:38:14.555244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.174 [2024-12-09 17:38:14.555251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.174 [2024-12-09 17:38:14.555258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.174 [2024-12-09 17:38:14.555273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.174 qpair failed and we were unable to recover it. 
00:27:48.174 [2024-12-09 17:38:14.565164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.174 [2024-12-09 17:38:14.565229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.174 [2024-12-09 17:38:14.565242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.174 [2024-12-09 17:38:14.565249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.174 [2024-12-09 17:38:14.565255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.174 [2024-12-09 17:38:14.565270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.174 qpair failed and we were unable to recover it. 
00:27:48.174 [2024-12-09 17:38:14.575273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.174 [2024-12-09 17:38:14.575352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.174 [2024-12-09 17:38:14.575366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.174 [2024-12-09 17:38:14.575373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.174 [2024-12-09 17:38:14.575379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.174 [2024-12-09 17:38:14.575395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.174 qpair failed and we were unable to recover it. 
00:27:48.174 [2024-12-09 17:38:14.585234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.174 [2024-12-09 17:38:14.585298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.174 [2024-12-09 17:38:14.585311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.174 [2024-12-09 17:38:14.585318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.174 [2024-12-09 17:38:14.585325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.174 [2024-12-09 17:38:14.585340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.174 qpair failed and we were unable to recover it. 
00:27:48.174 [2024-12-09 17:38:14.595276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.174 [2024-12-09 17:38:14.595328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.174 [2024-12-09 17:38:14.595342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.174 [2024-12-09 17:38:14.595349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.174 [2024-12-09 17:38:14.595355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.174 [2024-12-09 17:38:14.595370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.174 qpair failed and we were unable to recover it. 
00:27:48.174 [2024-12-09 17:38:14.605317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.174 [2024-12-09 17:38:14.605376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.174 [2024-12-09 17:38:14.605394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.174 [2024-12-09 17:38:14.605401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.174 [2024-12-09 17:38:14.605407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.174 [2024-12-09 17:38:14.605422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.174 qpair failed and we were unable to recover it. 
00:27:48.174 [2024-12-09 17:38:14.615324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.174 [2024-12-09 17:38:14.615379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.174 [2024-12-09 17:38:14.615393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.174 [2024-12-09 17:38:14.615400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.174 [2024-12-09 17:38:14.615406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.174 [2024-12-09 17:38:14.615421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.175 qpair failed and we were unable to recover it. 
00:27:48.175 [2024-12-09 17:38:14.625315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.175 [2024-12-09 17:38:14.625376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.175 [2024-12-09 17:38:14.625389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.175 [2024-12-09 17:38:14.625397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.175 [2024-12-09 17:38:14.625403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.175 [2024-12-09 17:38:14.625418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.175 qpair failed and we were unable to recover it. 
00:27:48.175 [2024-12-09 17:38:14.635365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.175 [2024-12-09 17:38:14.635416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.175 [2024-12-09 17:38:14.635429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.175 [2024-12-09 17:38:14.635436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.175 [2024-12-09 17:38:14.635443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.175 [2024-12-09 17:38:14.635458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.175 qpair failed and we were unable to recover it. 
00:27:48.175 [2024-12-09 17:38:14.645399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.175 [2024-12-09 17:38:14.645456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.175 [2024-12-09 17:38:14.645470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.175 [2024-12-09 17:38:14.645477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.175 [2024-12-09 17:38:14.645486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.175 [2024-12-09 17:38:14.645502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.175 qpair failed and we were unable to recover it.
00:27:48.175 [2024-12-09 17:38:14.655484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.175 [2024-12-09 17:38:14.655543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.175 [2024-12-09 17:38:14.655557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.175 [2024-12-09 17:38:14.655564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.175 [2024-12-09 17:38:14.655570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.175 [2024-12-09 17:38:14.655586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.175 qpair failed and we were unable to recover it.
00:27:48.175 [2024-12-09 17:38:14.665440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.175 [2024-12-09 17:38:14.665530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.175 [2024-12-09 17:38:14.665543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.175 [2024-12-09 17:38:14.665551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.175 [2024-12-09 17:38:14.665557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.175 [2024-12-09 17:38:14.665572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.175 qpair failed and we were unable to recover it.
00:27:48.175 [2024-12-09 17:38:14.675498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.175 [2024-12-09 17:38:14.675560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.175 [2024-12-09 17:38:14.675573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.175 [2024-12-09 17:38:14.675580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.175 [2024-12-09 17:38:14.675587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.175 [2024-12-09 17:38:14.675603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.175 qpair failed and we were unable to recover it.
00:27:48.175 [2024-12-09 17:38:14.685499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.175 [2024-12-09 17:38:14.685553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.175 [2024-12-09 17:38:14.685566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.175 [2024-12-09 17:38:14.685572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.175 [2024-12-09 17:38:14.685579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.175 [2024-12-09 17:38:14.685595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.175 qpair failed and we were unable to recover it.
00:27:48.175 [2024-12-09 17:38:14.695541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.175 [2024-12-09 17:38:14.695598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.175 [2024-12-09 17:38:14.695612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.175 [2024-12-09 17:38:14.695619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.175 [2024-12-09 17:38:14.695625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.175 [2024-12-09 17:38:14.695640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.175 qpair failed and we were unable to recover it.
00:27:48.175 [2024-12-09 17:38:14.705568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.175 [2024-12-09 17:38:14.705622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.175 [2024-12-09 17:38:14.705635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.175 [2024-12-09 17:38:14.705641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.175 [2024-12-09 17:38:14.705648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.175 [2024-12-09 17:38:14.705663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.175 qpair failed and we were unable to recover it.
00:27:48.435 [2024-12-09 17:38:14.715641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.435 [2024-12-09 17:38:14.715700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.435 [2024-12-09 17:38:14.715714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.435 [2024-12-09 17:38:14.715720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.435 [2024-12-09 17:38:14.715727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.435 [2024-12-09 17:38:14.715742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.435 qpair failed and we were unable to recover it.
00:27:48.435 [2024-12-09 17:38:14.725647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.435 [2024-12-09 17:38:14.725708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.435 [2024-12-09 17:38:14.725722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.435 [2024-12-09 17:38:14.725730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.435 [2024-12-09 17:38:14.725736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.435 [2024-12-09 17:38:14.725752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.435 qpair failed and we were unable to recover it.
00:27:48.435 [2024-12-09 17:38:14.735681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.435 [2024-12-09 17:38:14.735742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.435 [2024-12-09 17:38:14.735758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.435 [2024-12-09 17:38:14.735765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.435 [2024-12-09 17:38:14.735771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.435 [2024-12-09 17:38:14.735787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.435 qpair failed and we were unable to recover it.
00:27:48.435 [2024-12-09 17:38:14.745686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.435 [2024-12-09 17:38:14.745743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.435 [2024-12-09 17:38:14.745756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.435 [2024-12-09 17:38:14.745763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.435 [2024-12-09 17:38:14.745769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.435 [2024-12-09 17:38:14.745785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.435 qpair failed and we were unable to recover it.
00:27:48.435 [2024-12-09 17:38:14.755705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.435 [2024-12-09 17:38:14.755760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.435 [2024-12-09 17:38:14.755774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.435 [2024-12-09 17:38:14.755781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.435 [2024-12-09 17:38:14.755787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.435 [2024-12-09 17:38:14.755803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.435 qpair failed and we were unable to recover it.
00:27:48.435 [2024-12-09 17:38:14.765747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.435 [2024-12-09 17:38:14.765807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.435 [2024-12-09 17:38:14.765820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.435 [2024-12-09 17:38:14.765826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.435 [2024-12-09 17:38:14.765833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.435 [2024-12-09 17:38:14.765849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.435 qpair failed and we were unable to recover it.
00:27:48.435 [2024-12-09 17:38:14.775793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.435 [2024-12-09 17:38:14.775857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.435 [2024-12-09 17:38:14.775870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.435 [2024-12-09 17:38:14.775880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.435 [2024-12-09 17:38:14.775886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.435 [2024-12-09 17:38:14.775901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.435 qpair failed and we were unable to recover it.
00:27:48.435 [2024-12-09 17:38:14.785821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.435 [2024-12-09 17:38:14.785880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.435 [2024-12-09 17:38:14.785893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.435 [2024-12-09 17:38:14.785900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.435 [2024-12-09 17:38:14.785906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.435 [2024-12-09 17:38:14.785922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.435 qpair failed and we were unable to recover it.
00:27:48.435 [2024-12-09 17:38:14.795817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.435 [2024-12-09 17:38:14.795886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.435 [2024-12-09 17:38:14.795899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.435 [2024-12-09 17:38:14.795906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.435 [2024-12-09 17:38:14.795913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.435 [2024-12-09 17:38:14.795928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.435 qpair failed and we were unable to recover it.
00:27:48.435 [2024-12-09 17:38:14.805877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.435 [2024-12-09 17:38:14.805936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.435 [2024-12-09 17:38:14.805949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.435 [2024-12-09 17:38:14.805956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.435 [2024-12-09 17:38:14.805962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.435 [2024-12-09 17:38:14.805978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.435 qpair failed and we were unable to recover it.
00:27:48.435 [2024-12-09 17:38:14.815879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.435 [2024-12-09 17:38:14.815932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.435 [2024-12-09 17:38:14.815946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.435 [2024-12-09 17:38:14.815952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.435 [2024-12-09 17:38:14.815960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.435 [2024-12-09 17:38:14.815978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.435 qpair failed and we were unable to recover it.
00:27:48.435 [2024-12-09 17:38:14.825947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.435 [2024-12-09 17:38:14.826009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.435 [2024-12-09 17:38:14.826023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.435 [2024-12-09 17:38:14.826030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.435 [2024-12-09 17:38:14.826037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.435 [2024-12-09 17:38:14.826052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.435 qpair failed and we were unable to recover it.
00:27:48.435 [2024-12-09 17:38:14.835932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.435 [2024-12-09 17:38:14.835985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.435 [2024-12-09 17:38:14.835998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.836004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.836011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.836026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.436 [2024-12-09 17:38:14.845998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.436 [2024-12-09 17:38:14.846079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.436 [2024-12-09 17:38:14.846093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.846099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.846106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.846120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.436 [2024-12-09 17:38:14.856007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.436 [2024-12-09 17:38:14.856059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.436 [2024-12-09 17:38:14.856073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.856080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.856088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.856103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.436 [2024-12-09 17:38:14.866022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.436 [2024-12-09 17:38:14.866081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.436 [2024-12-09 17:38:14.866094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.866101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.866107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.866122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.436 [2024-12-09 17:38:14.876093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.436 [2024-12-09 17:38:14.876146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.436 [2024-12-09 17:38:14.876159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.876170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.876177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.876192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.436 [2024-12-09 17:38:14.886129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.436 [2024-12-09 17:38:14.886209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.436 [2024-12-09 17:38:14.886222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.886229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.886235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.886252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.436 [2024-12-09 17:38:14.896113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.436 [2024-12-09 17:38:14.896176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.436 [2024-12-09 17:38:14.896188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.896196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.896202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.896217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.436 [2024-12-09 17:38:14.906190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.436 [2024-12-09 17:38:14.906244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.436 [2024-12-09 17:38:14.906257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.906267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.906274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.906289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.436 [2024-12-09 17:38:14.916182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.436 [2024-12-09 17:38:14.916236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.436 [2024-12-09 17:38:14.916249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.916256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.916261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.916276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.436 [2024-12-09 17:38:14.926250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.436 [2024-12-09 17:38:14.926357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.436 [2024-12-09 17:38:14.926371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.926378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.926384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.926400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.436 [2024-12-09 17:38:14.936158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.436 [2024-12-09 17:38:14.936217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.436 [2024-12-09 17:38:14.936231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.936238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.936244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.936259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.436 [2024-12-09 17:38:14.946255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.436 [2024-12-09 17:38:14.946312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.436 [2024-12-09 17:38:14.946325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.946333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.946339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.946358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.436 [2024-12-09 17:38:14.956292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.436 [2024-12-09 17:38:14.956349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.436 [2024-12-09 17:38:14.956362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.956369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.956375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.956390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.436 [2024-12-09 17:38:14.966320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.436 [2024-12-09 17:38:14.966378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.436 [2024-12-09 17:38:14.966391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.436 [2024-12-09 17:38:14.966398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.436 [2024-12-09 17:38:14.966404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.436 [2024-12-09 17:38:14.966419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.436 qpair failed and we were unable to recover it.
00:27:48.695 [2024-12-09 17:38:14.976333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.695 [2024-12-09 17:38:14.976391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.695 [2024-12-09 17:38:14.976404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.695 [2024-12-09 17:38:14.976411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.695 [2024-12-09 17:38:14.976417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.695 [2024-12-09 17:38:14.976432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.695 qpair failed and we were unable to recover it.
00:27:48.695 [2024-12-09 17:38:14.986378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.695 [2024-12-09 17:38:14.986436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.695 [2024-12-09 17:38:14.986449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.695 [2024-12-09 17:38:14.986455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.695 [2024-12-09 17:38:14.986462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:48.695 [2024-12-09 17:38:14.986477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.695 qpair failed and we were unable to recover it.
00:27:48.695 [2024-12-09 17:38:14.996331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.695 [2024-12-09 17:38:14.996385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.695 [2024-12-09 17:38:14.996398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.695 [2024-12-09 17:38:14.996405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.695 [2024-12-09 17:38:14.996411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.695 [2024-12-09 17:38:14.996427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.695 qpair failed and we were unable to recover it. 
00:27:48.695 [2024-12-09 17:38:15.006390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.695 [2024-12-09 17:38:15.006447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.695 [2024-12-09 17:38:15.006461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.695 [2024-12-09 17:38:15.006468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.695 [2024-12-09 17:38:15.006474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.695 [2024-12-09 17:38:15.006489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.695 qpair failed and we were unable to recover it. 
00:27:48.695 [2024-12-09 17:38:15.016388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.695 [2024-12-09 17:38:15.016448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.695 [2024-12-09 17:38:15.016461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.695 [2024-12-09 17:38:15.016468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.695 [2024-12-09 17:38:15.016475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.016490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.026488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.696 [2024-12-09 17:38:15.026559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.696 [2024-12-09 17:38:15.026574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.696 [2024-12-09 17:38:15.026580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.696 [2024-12-09 17:38:15.026587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.026602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.036506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.696 [2024-12-09 17:38:15.036562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.696 [2024-12-09 17:38:15.036578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.696 [2024-12-09 17:38:15.036585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.696 [2024-12-09 17:38:15.036592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.036607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.046569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.696 [2024-12-09 17:38:15.046642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.696 [2024-12-09 17:38:15.046655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.696 [2024-12-09 17:38:15.046662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.696 [2024-12-09 17:38:15.046669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.046683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.056565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.696 [2024-12-09 17:38:15.056633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.696 [2024-12-09 17:38:15.056646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.696 [2024-12-09 17:38:15.056654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.696 [2024-12-09 17:38:15.056659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.056674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.066591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.696 [2024-12-09 17:38:15.066645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.696 [2024-12-09 17:38:15.066657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.696 [2024-12-09 17:38:15.066664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.696 [2024-12-09 17:38:15.066671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.066685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.076545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.696 [2024-12-09 17:38:15.076634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.696 [2024-12-09 17:38:15.076647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.696 [2024-12-09 17:38:15.076654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.696 [2024-12-09 17:38:15.076663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.076677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.086662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.696 [2024-12-09 17:38:15.086717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.696 [2024-12-09 17:38:15.086730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.696 [2024-12-09 17:38:15.086737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.696 [2024-12-09 17:38:15.086743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.086758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.096685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.696 [2024-12-09 17:38:15.096741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.696 [2024-12-09 17:38:15.096753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.696 [2024-12-09 17:38:15.096760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.696 [2024-12-09 17:38:15.096766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.096781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.106640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.696 [2024-12-09 17:38:15.106696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.696 [2024-12-09 17:38:15.106709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.696 [2024-12-09 17:38:15.106716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.696 [2024-12-09 17:38:15.106722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.106736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.116692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.696 [2024-12-09 17:38:15.116761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.696 [2024-12-09 17:38:15.116774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.696 [2024-12-09 17:38:15.116781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.696 [2024-12-09 17:38:15.116788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.116803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.126723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.696 [2024-12-09 17:38:15.126780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.696 [2024-12-09 17:38:15.126794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.696 [2024-12-09 17:38:15.126800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.696 [2024-12-09 17:38:15.126808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.126823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.136778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.696 [2024-12-09 17:38:15.136834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.696 [2024-12-09 17:38:15.136847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.696 [2024-12-09 17:38:15.136854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.696 [2024-12-09 17:38:15.136861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.136876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.146757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.696 [2024-12-09 17:38:15.146812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.696 [2024-12-09 17:38:15.146825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.696 [2024-12-09 17:38:15.146832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.696 [2024-12-09 17:38:15.146838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.696 [2024-12-09 17:38:15.146853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.696 qpair failed and we were unable to recover it. 
00:27:48.696 [2024-12-09 17:38:15.156838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.697 [2024-12-09 17:38:15.156891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.697 [2024-12-09 17:38:15.156904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.697 [2024-12-09 17:38:15.156911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.697 [2024-12-09 17:38:15.156918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.697 [2024-12-09 17:38:15.156933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.697 qpair failed and we were unable to recover it. 
00:27:48.697 [2024-12-09 17:38:15.166821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.697 [2024-12-09 17:38:15.166876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.697 [2024-12-09 17:38:15.166893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.697 [2024-12-09 17:38:15.166900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.697 [2024-12-09 17:38:15.166906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.697 [2024-12-09 17:38:15.166922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.697 qpair failed and we were unable to recover it. 
00:27:48.697 [2024-12-09 17:38:15.176904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.697 [2024-12-09 17:38:15.176962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.697 [2024-12-09 17:38:15.176975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.697 [2024-12-09 17:38:15.176982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.697 [2024-12-09 17:38:15.176988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.697 [2024-12-09 17:38:15.177003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.697 qpair failed and we were unable to recover it. 
00:27:48.697 [2024-12-09 17:38:15.186929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.697 [2024-12-09 17:38:15.186985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.697 [2024-12-09 17:38:15.186998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.697 [2024-12-09 17:38:15.187004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.697 [2024-12-09 17:38:15.187011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.697 [2024-12-09 17:38:15.187027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.697 qpair failed and we were unable to recover it. 
00:27:48.697 [2024-12-09 17:38:15.196950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.697 [2024-12-09 17:38:15.197004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.697 [2024-12-09 17:38:15.197018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.697 [2024-12-09 17:38:15.197025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.697 [2024-12-09 17:38:15.197032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.697 [2024-12-09 17:38:15.197047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.697 qpair failed and we were unable to recover it. 
00:27:48.697 [2024-12-09 17:38:15.206981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.697 [2024-12-09 17:38:15.207039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.697 [2024-12-09 17:38:15.207052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.697 [2024-12-09 17:38:15.207058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.697 [2024-12-09 17:38:15.207068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.697 [2024-12-09 17:38:15.207083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.697 qpair failed and we were unable to recover it. 
00:27:48.697 [2024-12-09 17:38:15.217057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.697 [2024-12-09 17:38:15.217112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.697 [2024-12-09 17:38:15.217125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.697 [2024-12-09 17:38:15.217132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.697 [2024-12-09 17:38:15.217138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.697 [2024-12-09 17:38:15.217153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.697 qpair failed and we were unable to recover it. 
00:27:48.697 [2024-12-09 17:38:15.227040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.697 [2024-12-09 17:38:15.227099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.697 [2024-12-09 17:38:15.227112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.697 [2024-12-09 17:38:15.227120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.697 [2024-12-09 17:38:15.227126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.697 [2024-12-09 17:38:15.227142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.697 qpair failed and we were unable to recover it. 
00:27:48.955 [2024-12-09 17:38:15.237122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.955 [2024-12-09 17:38:15.237186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.955 [2024-12-09 17:38:15.237200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.955 [2024-12-09 17:38:15.237207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.955 [2024-12-09 17:38:15.237213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.955 [2024-12-09 17:38:15.237228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.955 qpair failed and we were unable to recover it. 
00:27:48.955 [2024-12-09 17:38:15.247140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.955 [2024-12-09 17:38:15.247210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.955 [2024-12-09 17:38:15.247223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.955 [2024-12-09 17:38:15.247230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.955 [2024-12-09 17:38:15.247237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.955 [2024-12-09 17:38:15.247252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.955 qpair failed and we were unable to recover it. 
00:27:48.955 [2024-12-09 17:38:15.257108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.955 [2024-12-09 17:38:15.257162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.956 [2024-12-09 17:38:15.257179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.956 [2024-12-09 17:38:15.257185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.956 [2024-12-09 17:38:15.257192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.956 [2024-12-09 17:38:15.257207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.956 qpair failed and we were unable to recover it. 
00:27:48.956 [2024-12-09 17:38:15.267105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.956 [2024-12-09 17:38:15.267165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.956 [2024-12-09 17:38:15.267181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.956 [2024-12-09 17:38:15.267188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.956 [2024-12-09 17:38:15.267194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:48.956 [2024-12-09 17:38:15.267211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.956 qpair failed and we were unable to recover it. 
00:27:49.216 [... last CONNECT failure sequence repeated 34 more times at ~10 ms intervals, 17:38:15.277 through 17:38:15.608, each attempt ending with: qpair failed and we were unable to recover it. ...]
00:27:49.217 [2024-12-09 17:38:15.618153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.217 [2024-12-09 17:38:15.618221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.217 [2024-12-09 17:38:15.618235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.217 [2024-12-09 17:38:15.618244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.217 [2024-12-09 17:38:15.618253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.217 [2024-12-09 17:38:15.618271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.217 qpair failed and we were unable to recover it. 
00:27:49.217 [2024-12-09 17:38:15.628186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.217 [2024-12-09 17:38:15.628247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.217 [2024-12-09 17:38:15.628260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.217 [2024-12-09 17:38:15.628267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.217 [2024-12-09 17:38:15.628273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.217 [2024-12-09 17:38:15.628288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.217 qpair failed and we were unable to recover it. 
00:27:49.217 [2024-12-09 17:38:15.638116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.217 [2024-12-09 17:38:15.638202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.217 [2024-12-09 17:38:15.638216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.217 [2024-12-09 17:38:15.638223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.217 [2024-12-09 17:38:15.638232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.217 [2024-12-09 17:38:15.638247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.217 qpair failed and we were unable to recover it. 
00:27:49.217 [2024-12-09 17:38:15.648232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.217 [2024-12-09 17:38:15.648285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.217 [2024-12-09 17:38:15.648298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.217 [2024-12-09 17:38:15.648306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.217 [2024-12-09 17:38:15.648312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.217 [2024-12-09 17:38:15.648328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.217 qpair failed and we were unable to recover it. 
00:27:49.217 [2024-12-09 17:38:15.658283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.217 [2024-12-09 17:38:15.658371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.217 [2024-12-09 17:38:15.658384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.217 [2024-12-09 17:38:15.658393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.217 [2024-12-09 17:38:15.658399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.217 [2024-12-09 17:38:15.658414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.217 qpair failed and we were unable to recover it. 
00:27:49.217 [2024-12-09 17:38:15.668298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.217 [2024-12-09 17:38:15.668356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.217 [2024-12-09 17:38:15.668369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.217 [2024-12-09 17:38:15.668375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.217 [2024-12-09 17:38:15.668382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.217 [2024-12-09 17:38:15.668398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.217 qpair failed and we were unable to recover it. 
00:27:49.217 [2024-12-09 17:38:15.678340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.217 [2024-12-09 17:38:15.678393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.217 [2024-12-09 17:38:15.678407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.217 [2024-12-09 17:38:15.678413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.217 [2024-12-09 17:38:15.678419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.217 [2024-12-09 17:38:15.678436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.217 qpair failed and we were unable to recover it. 
00:27:49.217 [2024-12-09 17:38:15.688348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.217 [2024-12-09 17:38:15.688404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.217 [2024-12-09 17:38:15.688417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.217 [2024-12-09 17:38:15.688424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.217 [2024-12-09 17:38:15.688430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.217 [2024-12-09 17:38:15.688445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.217 qpair failed and we were unable to recover it. 
00:27:49.217 [2024-12-09 17:38:15.698372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.217 [2024-12-09 17:38:15.698425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.217 [2024-12-09 17:38:15.698438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.217 [2024-12-09 17:38:15.698445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.217 [2024-12-09 17:38:15.698451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.217 [2024-12-09 17:38:15.698467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.217 qpair failed and we were unable to recover it. 
00:27:49.217 [2024-12-09 17:38:15.708403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.217 [2024-12-09 17:38:15.708459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.217 [2024-12-09 17:38:15.708472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.217 [2024-12-09 17:38:15.708480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.217 [2024-12-09 17:38:15.708486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.217 [2024-12-09 17:38:15.708502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.218 qpair failed and we were unable to recover it. 
00:27:49.218 [2024-12-09 17:38:15.718429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.218 [2024-12-09 17:38:15.718483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.218 [2024-12-09 17:38:15.718498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.218 [2024-12-09 17:38:15.718504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.218 [2024-12-09 17:38:15.718511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.218 [2024-12-09 17:38:15.718526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.218 qpair failed and we were unable to recover it. 
00:27:49.218 [2024-12-09 17:38:15.728512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.218 [2024-12-09 17:38:15.728567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.218 [2024-12-09 17:38:15.728584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.218 [2024-12-09 17:38:15.728591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.218 [2024-12-09 17:38:15.728597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.218 [2024-12-09 17:38:15.728613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.218 qpair failed and we were unable to recover it. 
00:27:49.218 [2024-12-09 17:38:15.738488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.218 [2024-12-09 17:38:15.738538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.218 [2024-12-09 17:38:15.738552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.218 [2024-12-09 17:38:15.738559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.218 [2024-12-09 17:38:15.738565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.218 [2024-12-09 17:38:15.738581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.218 qpair failed and we were unable to recover it. 
00:27:49.218 [2024-12-09 17:38:15.748512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.218 [2024-12-09 17:38:15.748574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.218 [2024-12-09 17:38:15.748587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.218 [2024-12-09 17:38:15.748595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.218 [2024-12-09 17:38:15.748601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.218 [2024-12-09 17:38:15.748616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.218 qpair failed and we were unable to recover it. 
00:27:49.477 [2024-12-09 17:38:15.758579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.477 [2024-12-09 17:38:15.758639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.477 [2024-12-09 17:38:15.758652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.477 [2024-12-09 17:38:15.758659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.477 [2024-12-09 17:38:15.758666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.477 [2024-12-09 17:38:15.758681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.477 qpair failed and we were unable to recover it. 
00:27:49.477 [2024-12-09 17:38:15.768597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.477 [2024-12-09 17:38:15.768651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.477 [2024-12-09 17:38:15.768664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.477 [2024-12-09 17:38:15.768671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.477 [2024-12-09 17:38:15.768680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.477 [2024-12-09 17:38:15.768695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.477 qpair failed and we were unable to recover it. 
00:27:49.477 [2024-12-09 17:38:15.778594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.477 [2024-12-09 17:38:15.778661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.477 [2024-12-09 17:38:15.778674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.477 [2024-12-09 17:38:15.778681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.477 [2024-12-09 17:38:15.778688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.477 [2024-12-09 17:38:15.778702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.477 qpair failed and we were unable to recover it. 
00:27:49.477 [2024-12-09 17:38:15.788659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.477 [2024-12-09 17:38:15.788722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.477 [2024-12-09 17:38:15.788735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.477 [2024-12-09 17:38:15.788743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.477 [2024-12-09 17:38:15.788749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.477 [2024-12-09 17:38:15.788764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.477 qpair failed and we were unable to recover it. 
00:27:49.477 [2024-12-09 17:38:15.798651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.477 [2024-12-09 17:38:15.798707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.477 [2024-12-09 17:38:15.798721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.477 [2024-12-09 17:38:15.798728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.477 [2024-12-09 17:38:15.798735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.477 [2024-12-09 17:38:15.798751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.477 qpair failed and we were unable to recover it. 
00:27:49.477 [2024-12-09 17:38:15.808684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.477 [2024-12-09 17:38:15.808752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.477 [2024-12-09 17:38:15.808765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.477 [2024-12-09 17:38:15.808772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.477 [2024-12-09 17:38:15.808778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.477 [2024-12-09 17:38:15.808793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.477 qpair failed and we were unable to recover it. 
00:27:49.477 [2024-12-09 17:38:15.818772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.477 [2024-12-09 17:38:15.818874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.477 [2024-12-09 17:38:15.818889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.477 [2024-12-09 17:38:15.818896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.477 [2024-12-09 17:38:15.818901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.477 [2024-12-09 17:38:15.818917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.477 qpair failed and we were unable to recover it. 
00:27:49.477 [2024-12-09 17:38:15.828778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.477 [2024-12-09 17:38:15.828845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.477 [2024-12-09 17:38:15.828858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.477 [2024-12-09 17:38:15.828865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.828871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.828886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.478 [2024-12-09 17:38:15.838768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.478 [2024-12-09 17:38:15.838821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.478 [2024-12-09 17:38:15.838834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.478 [2024-12-09 17:38:15.838841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.838847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.838862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.478 [2024-12-09 17:38:15.848808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.478 [2024-12-09 17:38:15.848866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.478 [2024-12-09 17:38:15.848879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.478 [2024-12-09 17:38:15.848885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.848892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.848907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.478 [2024-12-09 17:38:15.858849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.478 [2024-12-09 17:38:15.858909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.478 [2024-12-09 17:38:15.858925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.478 [2024-12-09 17:38:15.858932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.858938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.858953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.478 [2024-12-09 17:38:15.868829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.478 [2024-12-09 17:38:15.868914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.478 [2024-12-09 17:38:15.868927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.478 [2024-12-09 17:38:15.868934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.868940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.868955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.478 [2024-12-09 17:38:15.878866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.478 [2024-12-09 17:38:15.878971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.478 [2024-12-09 17:38:15.878984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.478 [2024-12-09 17:38:15.878991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.878997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.879012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.478 [2024-12-09 17:38:15.888916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.478 [2024-12-09 17:38:15.888972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.478 [2024-12-09 17:38:15.888985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.478 [2024-12-09 17:38:15.888992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.888998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.889013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.478 [2024-12-09 17:38:15.898976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.478 [2024-12-09 17:38:15.899035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.478 [2024-12-09 17:38:15.899048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.478 [2024-12-09 17:38:15.899058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.899064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.899079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.478 [2024-12-09 17:38:15.908964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.478 [2024-12-09 17:38:15.909016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.478 [2024-12-09 17:38:15.909029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.478 [2024-12-09 17:38:15.909036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.909043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.909058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.478 [2024-12-09 17:38:15.918923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.478 [2024-12-09 17:38:15.918984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.478 [2024-12-09 17:38:15.918998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.478 [2024-12-09 17:38:15.919006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.919012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.919027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.478 [2024-12-09 17:38:15.929056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.478 [2024-12-09 17:38:15.929164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.478 [2024-12-09 17:38:15.929180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.478 [2024-12-09 17:38:15.929187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.929193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.929208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.478 [2024-12-09 17:38:15.938981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.478 [2024-12-09 17:38:15.939044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.478 [2024-12-09 17:38:15.939057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.478 [2024-12-09 17:38:15.939064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.939070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.939088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.478 [2024-12-09 17:38:15.949107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.478 [2024-12-09 17:38:15.949164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.478 [2024-12-09 17:38:15.949181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.478 [2024-12-09 17:38:15.949188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.949194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.949210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.478 [2024-12-09 17:38:15.959102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.478 [2024-12-09 17:38:15.959154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.478 [2024-12-09 17:38:15.959170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.478 [2024-12-09 17:38:15.959178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.478 [2024-12-09 17:38:15.959184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.478 [2024-12-09 17:38:15.959199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.478 qpair failed and we were unable to recover it. 
00:27:49.479 [2024-12-09 17:38:15.969135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.479 [2024-12-09 17:38:15.969200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.479 [2024-12-09 17:38:15.969213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.479 [2024-12-09 17:38:15.969220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.479 [2024-12-09 17:38:15.969226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.479 [2024-12-09 17:38:15.969241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.479 qpair failed and we were unable to recover it. 
00:27:49.479 [2024-12-09 17:38:15.979159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.479 [2024-12-09 17:38:15.979212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.479 [2024-12-09 17:38:15.979225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.479 [2024-12-09 17:38:15.979232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.479 [2024-12-09 17:38:15.979238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.479 [2024-12-09 17:38:15.979253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.479 qpair failed and we were unable to recover it. 
00:27:49.479 [2024-12-09 17:38:15.989161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.479 [2024-12-09 17:38:15.989241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.479 [2024-12-09 17:38:15.989255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.479 [2024-12-09 17:38:15.989262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.479 [2024-12-09 17:38:15.989268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.479 [2024-12-09 17:38:15.989283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.479 qpair failed and we were unable to recover it. 
00:27:49.479 [2024-12-09 17:38:15.999241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.479 [2024-12-09 17:38:15.999309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.479 [2024-12-09 17:38:15.999322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.479 [2024-12-09 17:38:15.999328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.479 [2024-12-09 17:38:15.999334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.479 [2024-12-09 17:38:15.999349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.479 qpair failed and we were unable to recover it. 
00:27:49.479 [2024-12-09 17:38:16.009263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.479 [2024-12-09 17:38:16.009317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.479 [2024-12-09 17:38:16.009330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.479 [2024-12-09 17:38:16.009337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.479 [2024-12-09 17:38:16.009344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.479 [2024-12-09 17:38:16.009359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.479 qpair failed and we were unable to recover it. 
00:27:49.738 [2024-12-09 17:38:16.019271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.738 [2024-12-09 17:38:16.019330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.738 [2024-12-09 17:38:16.019344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.738 [2024-12-09 17:38:16.019351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.738 [2024-12-09 17:38:16.019358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.738 [2024-12-09 17:38:16.019373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.738 qpair failed and we were unable to recover it. 
00:27:49.738 [2024-12-09 17:38:16.029325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.738 [2024-12-09 17:38:16.029384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.738 [2024-12-09 17:38:16.029397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.738 [2024-12-09 17:38:16.029409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.738 [2024-12-09 17:38:16.029415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.738 [2024-12-09 17:38:16.029430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.738 qpair failed and we were unable to recover it. 
00:27:49.738 [2024-12-09 17:38:16.039368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.738 [2024-12-09 17:38:16.039432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.738 [2024-12-09 17:38:16.039445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.738 [2024-12-09 17:38:16.039452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.738 [2024-12-09 17:38:16.039458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.738 [2024-12-09 17:38:16.039473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.738 qpair failed and we were unable to recover it. 
00:27:49.738 [2024-12-09 17:38:16.049385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.738 [2024-12-09 17:38:16.049454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.738 [2024-12-09 17:38:16.049467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.738 [2024-12-09 17:38:16.049474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.738 [2024-12-09 17:38:16.049481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.738 [2024-12-09 17:38:16.049496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.738 qpair failed and we were unable to recover it. 
00:27:49.738 [2024-12-09 17:38:16.059412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.738 [2024-12-09 17:38:16.059467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.738 [2024-12-09 17:38:16.059480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.738 [2024-12-09 17:38:16.059487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.738 [2024-12-09 17:38:16.059493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.738 [2024-12-09 17:38:16.059508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.738 qpair failed and we were unable to recover it. 
00:27:49.738 [2024-12-09 17:38:16.069436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.738 [2024-12-09 17:38:16.069491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.738 [2024-12-09 17:38:16.069503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.738 [2024-12-09 17:38:16.069511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.738 [2024-12-09 17:38:16.069517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.738 [2024-12-09 17:38:16.069535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.738 qpair failed and we were unable to recover it. 
00:27:49.738 [2024-12-09 17:38:16.079462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.738 [2024-12-09 17:38:16.079526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.738 [2024-12-09 17:38:16.079540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.738 [2024-12-09 17:38:16.079546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.738 [2024-12-09 17:38:16.079553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.739 [2024-12-09 17:38:16.079568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.739 qpair failed and we were unable to recover it. 
00:27:49.739 [2024-12-09 17:38:16.089501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.739 [2024-12-09 17:38:16.089560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.739 [2024-12-09 17:38:16.089572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.739 [2024-12-09 17:38:16.089579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.739 [2024-12-09 17:38:16.089586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.739 [2024-12-09 17:38:16.089601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.739 qpair failed and we were unable to recover it. 
00:27:49.739 [2024-12-09 17:38:16.099515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.739 [2024-12-09 17:38:16.099569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.739 [2024-12-09 17:38:16.099582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.739 [2024-12-09 17:38:16.099589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.739 [2024-12-09 17:38:16.099595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.739 [2024-12-09 17:38:16.099610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.739 qpair failed and we were unable to recover it. 
00:27:49.739 [2024-12-09 17:38:16.109545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.739 [2024-12-09 17:38:16.109600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.739 [2024-12-09 17:38:16.109613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.739 [2024-12-09 17:38:16.109620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.739 [2024-12-09 17:38:16.109626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.739 [2024-12-09 17:38:16.109641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.739 qpair failed and we were unable to recover it. 
00:27:49.739 [2024-12-09 17:38:16.119572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.739 [2024-12-09 17:38:16.119628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.739 [2024-12-09 17:38:16.119641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.739 [2024-12-09 17:38:16.119648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.739 [2024-12-09 17:38:16.119654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.739 [2024-12-09 17:38:16.119669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.739 qpair failed and we were unable to recover it. 
00:27:49.739 [2024-12-09 17:38:16.129621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.739 [2024-12-09 17:38:16.129679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.739 [2024-12-09 17:38:16.129693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.739 [2024-12-09 17:38:16.129701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.739 [2024-12-09 17:38:16.129707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.739 [2024-12-09 17:38:16.129722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.739 qpair failed and we were unable to recover it. 
00:27:49.739 [2024-12-09 17:38:16.139690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.739 [2024-12-09 17:38:16.139782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.739 [2024-12-09 17:38:16.139795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.739 [2024-12-09 17:38:16.139801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.739 [2024-12-09 17:38:16.139807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.739 [2024-12-09 17:38:16.139823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.739 qpair failed and we were unable to recover it. 
00:27:49.739 [2024-12-09 17:38:16.149721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.739 [2024-12-09 17:38:16.149768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.739 [2024-12-09 17:38:16.149781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.739 [2024-12-09 17:38:16.149788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.739 [2024-12-09 17:38:16.149794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.739 [2024-12-09 17:38:16.149809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.739 qpair failed and we were unable to recover it. 
00:27:49.739 [2024-12-09 17:38:16.159670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.739 [2024-12-09 17:38:16.159737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.739 [2024-12-09 17:38:16.159753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.739 [2024-12-09 17:38:16.159759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.739 [2024-12-09 17:38:16.159765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.739 [2024-12-09 17:38:16.159781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.739 qpair failed and we were unable to recover it. 
00:27:49.739 [2024-12-09 17:38:16.169730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.739 [2024-12-09 17:38:16.169786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.739 [2024-12-09 17:38:16.169799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.739 [2024-12-09 17:38:16.169806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.739 [2024-12-09 17:38:16.169813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.739 [2024-12-09 17:38:16.169828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.739 qpair failed and we were unable to recover it. 
00:27:49.739 [2024-12-09 17:38:16.179740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.739 [2024-12-09 17:38:16.179794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.739 [2024-12-09 17:38:16.179807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.739 [2024-12-09 17:38:16.179814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.739 [2024-12-09 17:38:16.179821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.739 [2024-12-09 17:38:16.179837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.739 qpair failed and we were unable to recover it. 
00:27:49.739 [2024-12-09 17:38:16.189819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.739 [2024-12-09 17:38:16.189876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.739 [2024-12-09 17:38:16.189889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.739 [2024-12-09 17:38:16.189896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.739 [2024-12-09 17:38:16.189903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:49.739 [2024-12-09 17:38:16.189917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.739 qpair failed and we were unable to recover it. 
00:27:49.739 [2024-12-09 17:38:16.199747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.739 [2024-12-09 17:38:16.199844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.739 [2024-12-09 17:38:16.199857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.739 [2024-12-09 17:38:16.199864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.739 [2024-12-09 17:38:16.199874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.739 [2024-12-09 17:38:16.199889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.739 qpair failed and we were unable to recover it.
00:27:49.739 [2024-12-09 17:38:16.209836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.739 [2024-12-09 17:38:16.209891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.739 [2024-12-09 17:38:16.209904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.739 [2024-12-09 17:38:16.209911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.739 [2024-12-09 17:38:16.209918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.739 [2024-12-09 17:38:16.209933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.739 qpair failed and we were unable to recover it.
00:27:49.739 [2024-12-09 17:38:16.219898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.739 [2024-12-09 17:38:16.219955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.739 [2024-12-09 17:38:16.219969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.739 [2024-12-09 17:38:16.219975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.739 [2024-12-09 17:38:16.219982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.739 [2024-12-09 17:38:16.219997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.739 qpair failed and we were unable to recover it.
00:27:49.739 [2024-12-09 17:38:16.229875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.740 [2024-12-09 17:38:16.229931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.740 [2024-12-09 17:38:16.229945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.740 [2024-12-09 17:38:16.229952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.740 [2024-12-09 17:38:16.229958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.740 [2024-12-09 17:38:16.229974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.740 qpair failed and we were unable to recover it.
00:27:49.740 [2024-12-09 17:38:16.239903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.740 [2024-12-09 17:38:16.239956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.740 [2024-12-09 17:38:16.239969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.740 [2024-12-09 17:38:16.239976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.740 [2024-12-09 17:38:16.239983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.740 [2024-12-09 17:38:16.239998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.740 qpair failed and we were unable to recover it.
00:27:49.740 [2024-12-09 17:38:16.249984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.740 [2024-12-09 17:38:16.250046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.740 [2024-12-09 17:38:16.250059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.740 [2024-12-09 17:38:16.250066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.740 [2024-12-09 17:38:16.250072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.740 [2024-12-09 17:38:16.250087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.740 qpair failed and we were unable to recover it.
00:27:49.740 [2024-12-09 17:38:16.259966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.740 [2024-12-09 17:38:16.260019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.740 [2024-12-09 17:38:16.260033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.740 [2024-12-09 17:38:16.260040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.740 [2024-12-09 17:38:16.260046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.740 [2024-12-09 17:38:16.260061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.740 qpair failed and we were unable to recover it.
00:27:49.740 [2024-12-09 17:38:16.269995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.740 [2024-12-09 17:38:16.270050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.740 [2024-12-09 17:38:16.270063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.740 [2024-12-09 17:38:16.270070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.740 [2024-12-09 17:38:16.270077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.740 [2024-12-09 17:38:16.270092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.740 qpair failed and we were unable to recover it.
00:27:49.999 [2024-12-09 17:38:16.279964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.999 [2024-12-09 17:38:16.280037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.999 [2024-12-09 17:38:16.280050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.999 [2024-12-09 17:38:16.280057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.999 [2024-12-09 17:38:16.280063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.999 [2024-12-09 17:38:16.280078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.999 qpair failed and we were unable to recover it.
00:27:49.999 [2024-12-09 17:38:16.290121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.999 [2024-12-09 17:38:16.290223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.999 [2024-12-09 17:38:16.290240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.999 [2024-12-09 17:38:16.290247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.999 [2024-12-09 17:38:16.290253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.999 [2024-12-09 17:38:16.290269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.999 qpair failed and we were unable to recover it.
00:27:49.999 [2024-12-09 17:38:16.300163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.999 [2024-12-09 17:38:16.300255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.999 [2024-12-09 17:38:16.300268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.999 [2024-12-09 17:38:16.300275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.999 [2024-12-09 17:38:16.300282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.999 [2024-12-09 17:38:16.300297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.999 qpair failed and we were unable to recover it.
00:27:49.999 [2024-12-09 17:38:16.310083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.999 [2024-12-09 17:38:16.310141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.999 [2024-12-09 17:38:16.310155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.999 [2024-12-09 17:38:16.310162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.999 [2024-12-09 17:38:16.310173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.999 [2024-12-09 17:38:16.310188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.999 qpair failed and we were unable to recover it.
00:27:49.999 [2024-12-09 17:38:16.320137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.999 [2024-12-09 17:38:16.320210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.999 [2024-12-09 17:38:16.320224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.999 [2024-12-09 17:38:16.320232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.999 [2024-12-09 17:38:16.320238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.999 [2024-12-09 17:38:16.320253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.999 qpair failed and we were unable to recover it.
00:27:49.999 [2024-12-09 17:38:16.330160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.999 [2024-12-09 17:38:16.330247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.999 [2024-12-09 17:38:16.330260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.999 [2024-12-09 17:38:16.330266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.999 [2024-12-09 17:38:16.330277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.999 [2024-12-09 17:38:16.330293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.999 qpair failed and we were unable to recover it.
00:27:49.999 [2024-12-09 17:38:16.340178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.999 [2024-12-09 17:38:16.340229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.999 [2024-12-09 17:38:16.340242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.999 [2024-12-09 17:38:16.340249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.999 [2024-12-09 17:38:16.340255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.999 [2024-12-09 17:38:16.340271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.999 qpair failed and we were unable to recover it.
00:27:49.999 [2024-12-09 17:38:16.350265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.999 [2024-12-09 17:38:16.350317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.999 [2024-12-09 17:38:16.350330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.999 [2024-12-09 17:38:16.350337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.999 [2024-12-09 17:38:16.350343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:49.999 [2024-12-09 17:38:16.350359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.999 qpair failed and we were unable to recover it.
00:27:49.999 [2024-12-09 17:38:16.360239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.999 [2024-12-09 17:38:16.360292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.999 [2024-12-09 17:38:16.360305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.360312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.360319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.360333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.000 [2024-12-09 17:38:16.370337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.000 [2024-12-09 17:38:16.370445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.000 [2024-12-09 17:38:16.370458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.370465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.370471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.370486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.000 [2024-12-09 17:38:16.380287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.000 [2024-12-09 17:38:16.380342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.000 [2024-12-09 17:38:16.380355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.380361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.380367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.380382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.000 [2024-12-09 17:38:16.390291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.000 [2024-12-09 17:38:16.390343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.000 [2024-12-09 17:38:16.390356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.390363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.390370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.390385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.000 [2024-12-09 17:38:16.400280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.000 [2024-12-09 17:38:16.400331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.000 [2024-12-09 17:38:16.400344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.400351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.400357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.400372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.000 [2024-12-09 17:38:16.410428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.000 [2024-12-09 17:38:16.410484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.000 [2024-12-09 17:38:16.410497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.410504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.410510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.410525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.000 [2024-12-09 17:38:16.420442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.000 [2024-12-09 17:38:16.420503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.000 [2024-12-09 17:38:16.420520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.420526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.420533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.420548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.000 [2024-12-09 17:38:16.430403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.000 [2024-12-09 17:38:16.430454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.000 [2024-12-09 17:38:16.430466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.430474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.430480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.430495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.000 [2024-12-09 17:38:16.440409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.000 [2024-12-09 17:38:16.440459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.000 [2024-12-09 17:38:16.440473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.440480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.440486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.440501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.000 [2024-12-09 17:38:16.450527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.000 [2024-12-09 17:38:16.450589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.000 [2024-12-09 17:38:16.450603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.450610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.450616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.450632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.000 [2024-12-09 17:38:16.460516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.000 [2024-12-09 17:38:16.460602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.000 [2024-12-09 17:38:16.460617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.460628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.460635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.460650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.000 [2024-12-09 17:38:16.470594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.000 [2024-12-09 17:38:16.470649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.000 [2024-12-09 17:38:16.470663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.470670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.470676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.470691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.000 [2024-12-09 17:38:16.480578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.000 [2024-12-09 17:38:16.480631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.000 [2024-12-09 17:38:16.480644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.480651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.480658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.480674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.000 [2024-12-09 17:38:16.490568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.000 [2024-12-09 17:38:16.490650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.000 [2024-12-09 17:38:16.490664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.000 [2024-12-09 17:38:16.490671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.000 [2024-12-09 17:38:16.490677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.000 [2024-12-09 17:38:16.490693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.000 qpair failed and we were unable to recover it.
00:27:50.001 [2024-12-09 17:38:16.500664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.001 [2024-12-09 17:38:16.500718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.001 [2024-12-09 17:38:16.500731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.001 [2024-12-09 17:38:16.500738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.001 [2024-12-09 17:38:16.500744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.001 [2024-12-09 17:38:16.500762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.001 qpair failed and we were unable to recover it.
00:27:50.001 [2024-12-09 17:38:16.510688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.001 [2024-12-09 17:38:16.510752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.001 [2024-12-09 17:38:16.510765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.001 [2024-12-09 17:38:16.510772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.001 [2024-12-09 17:38:16.510778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.001 [2024-12-09 17:38:16.510792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.001 qpair failed and we were unable to recover it.
00:27:50.001 [2024-12-09 17:38:16.520666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.001 [2024-12-09 17:38:16.520721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.001 [2024-12-09 17:38:16.520735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.001 [2024-12-09 17:38:16.520741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.001 [2024-12-09 17:38:16.520748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.001 [2024-12-09 17:38:16.520763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.001 qpair failed and we were unable to recover it.
00:27:50.001 [2024-12-09 17:38:16.530750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.001 [2024-12-09 17:38:16.530830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.001 [2024-12-09 17:38:16.530844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.001 [2024-12-09 17:38:16.530851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.001 [2024-12-09 17:38:16.530857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.001 [2024-12-09 17:38:16.530872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.001 qpair failed and we were unable to recover it.
00:27:50.260 [2024-12-09 17:38:16.540798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.260 [2024-12-09 17:38:16.540888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.260 [2024-12-09 17:38:16.540901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.260 [2024-12-09 17:38:16.540908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.260 [2024-12-09 17:38:16.540914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.260 [2024-12-09 17:38:16.540928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.260 qpair failed and we were unable to recover it.
00:27:50.260 [2024-12-09 17:38:16.550755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.260 [2024-12-09 17:38:16.550817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.260 [2024-12-09 17:38:16.550831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.260 [2024-12-09 17:38:16.550837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.260 [2024-12-09 17:38:16.550844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.260 [2024-12-09 17:38:16.550859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.260 qpair failed and we were unable to recover it. 
00:27:50.260 [2024-12-09 17:38:16.560873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.260 [2024-12-09 17:38:16.560933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.260 [2024-12-09 17:38:16.560946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.260 [2024-12-09 17:38:16.560954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.260 [2024-12-09 17:38:16.560960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.260 [2024-12-09 17:38:16.560975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.260 qpair failed and we were unable to recover it. 
00:27:50.260 [2024-12-09 17:38:16.570832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.260 [2024-12-09 17:38:16.570918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.260 [2024-12-09 17:38:16.570931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.570938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.570944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.570959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.261 qpair failed and we were unable to recover it. 
00:27:50.261 [2024-12-09 17:38:16.580900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.261 [2024-12-09 17:38:16.581009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.261 [2024-12-09 17:38:16.581021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.581028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.581033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.581048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.261 qpair failed and we were unable to recover it. 
00:27:50.261 [2024-12-09 17:38:16.590885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.261 [2024-12-09 17:38:16.590943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.261 [2024-12-09 17:38:16.590956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.590966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.590972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.590987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.261 qpair failed and we were unable to recover it. 
00:27:50.261 [2024-12-09 17:38:16.600995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.261 [2024-12-09 17:38:16.601074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.261 [2024-12-09 17:38:16.601088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.601094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.601100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.601115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.261 qpair failed and we were unable to recover it. 
00:27:50.261 [2024-12-09 17:38:16.611042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.261 [2024-12-09 17:38:16.611101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.261 [2024-12-09 17:38:16.611114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.611121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.611127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.611142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.261 qpair failed and we were unable to recover it. 
00:27:50.261 [2024-12-09 17:38:16.621050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.261 [2024-12-09 17:38:16.621106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.261 [2024-12-09 17:38:16.621120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.621127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.621133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.621149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.261 qpair failed and we were unable to recover it. 
00:27:50.261 [2024-12-09 17:38:16.631036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.261 [2024-12-09 17:38:16.631105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.261 [2024-12-09 17:38:16.631119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.631125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.631132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.631150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.261 qpair failed and we were unable to recover it. 
00:27:50.261 [2024-12-09 17:38:16.641111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.261 [2024-12-09 17:38:16.641162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.261 [2024-12-09 17:38:16.641181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.641188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.641194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.641210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.261 qpair failed and we were unable to recover it. 
00:27:50.261 [2024-12-09 17:38:16.651127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.261 [2024-12-09 17:38:16.651196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.261 [2024-12-09 17:38:16.651209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.651217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.651223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.651238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.261 qpair failed and we were unable to recover it. 
00:27:50.261 [2024-12-09 17:38:16.661138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.261 [2024-12-09 17:38:16.661197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.261 [2024-12-09 17:38:16.661210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.661217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.661223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.661239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.261 qpair failed and we were unable to recover it. 
00:27:50.261 [2024-12-09 17:38:16.671108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.261 [2024-12-09 17:38:16.671163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.261 [2024-12-09 17:38:16.671180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.671187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.671193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.671209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.261 qpair failed and we were unable to recover it. 
00:27:50.261 [2024-12-09 17:38:16.681170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.261 [2024-12-09 17:38:16.681228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.261 [2024-12-09 17:38:16.681241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.681248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.681254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.681270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.261 qpair failed and we were unable to recover it. 
00:27:50.261 [2024-12-09 17:38:16.691211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.261 [2024-12-09 17:38:16.691270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.261 [2024-12-09 17:38:16.691282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.691289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.691296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.691311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.261 qpair failed and we were unable to recover it. 
00:27:50.261 [2024-12-09 17:38:16.701234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.261 [2024-12-09 17:38:16.701286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.261 [2024-12-09 17:38:16.701299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.261 [2024-12-09 17:38:16.701306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.261 [2024-12-09 17:38:16.701312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.261 [2024-12-09 17:38:16.701327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.262 qpair failed and we were unable to recover it. 
00:27:50.262 [2024-12-09 17:38:16.711263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.262 [2024-12-09 17:38:16.711335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.262 [2024-12-09 17:38:16.711349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.262 [2024-12-09 17:38:16.711356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.262 [2024-12-09 17:38:16.711362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.262 [2024-12-09 17:38:16.711377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.262 qpair failed and we were unable to recover it. 
00:27:50.262 [2024-12-09 17:38:16.721353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.262 [2024-12-09 17:38:16.721450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.262 [2024-12-09 17:38:16.721469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.262 [2024-12-09 17:38:16.721476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.262 [2024-12-09 17:38:16.721482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.262 [2024-12-09 17:38:16.721498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.262 qpair failed and we were unable to recover it. 
00:27:50.262 [2024-12-09 17:38:16.731316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.262 [2024-12-09 17:38:16.731370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.262 [2024-12-09 17:38:16.731384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.262 [2024-12-09 17:38:16.731390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.262 [2024-12-09 17:38:16.731396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.262 [2024-12-09 17:38:16.731412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.262 qpair failed and we were unable to recover it. 
00:27:50.262 [2024-12-09 17:38:16.741351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.262 [2024-12-09 17:38:16.741409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.262 [2024-12-09 17:38:16.741422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.262 [2024-12-09 17:38:16.741429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.262 [2024-12-09 17:38:16.741436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.262 [2024-12-09 17:38:16.741450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.262 qpair failed and we were unable to recover it. 
00:27:50.262 [2024-12-09 17:38:16.751379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.262 [2024-12-09 17:38:16.751434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.262 [2024-12-09 17:38:16.751447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.262 [2024-12-09 17:38:16.751454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.262 [2024-12-09 17:38:16.751461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.262 [2024-12-09 17:38:16.751476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.262 qpair failed and we were unable to recover it. 
00:27:50.262 [2024-12-09 17:38:16.761395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.262 [2024-12-09 17:38:16.761447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.262 [2024-12-09 17:38:16.761460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.262 [2024-12-09 17:38:16.761466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.262 [2024-12-09 17:38:16.761476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.262 [2024-12-09 17:38:16.761491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.262 qpair failed and we were unable to recover it. 
00:27:50.262 [2024-12-09 17:38:16.771449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.262 [2024-12-09 17:38:16.771535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.262 [2024-12-09 17:38:16.771547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.262 [2024-12-09 17:38:16.771554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.262 [2024-12-09 17:38:16.771560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.262 [2024-12-09 17:38:16.771574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.262 qpair failed and we were unable to recover it. 
00:27:50.262 [2024-12-09 17:38:16.781456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.262 [2024-12-09 17:38:16.781512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.262 [2024-12-09 17:38:16.781526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.262 [2024-12-09 17:38:16.781533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.262 [2024-12-09 17:38:16.781539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.262 [2024-12-09 17:38:16.781555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.262 qpair failed and we were unable to recover it. 
00:27:50.262 [2024-12-09 17:38:16.791476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.262 [2024-12-09 17:38:16.791527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.262 [2024-12-09 17:38:16.791540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.262 [2024-12-09 17:38:16.791547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.262 [2024-12-09 17:38:16.791554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.262 [2024-12-09 17:38:16.791569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.262 qpair failed and we were unable to recover it. 
00:27:50.522 [2024-12-09 17:38:16.801533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.522 [2024-12-09 17:38:16.801590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.522 [2024-12-09 17:38:16.801603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.522 [2024-12-09 17:38:16.801610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.522 [2024-12-09 17:38:16.801616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.522 [2024-12-09 17:38:16.801631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.522 qpair failed and we were unable to recover it. 
00:27:50.522 [2024-12-09 17:38:16.811580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.522 [2024-12-09 17:38:16.811656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.522 [2024-12-09 17:38:16.811669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.522 [2024-12-09 17:38:16.811676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.522 [2024-12-09 17:38:16.811682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.522 [2024-12-09 17:38:16.811697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.522 qpair failed and we were unable to recover it. 
00:27:50.522 [2024-12-09 17:38:16.821584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.522 [2024-12-09 17:38:16.821641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.522 [2024-12-09 17:38:16.821655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.522 [2024-12-09 17:38:16.821662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.522 [2024-12-09 17:38:16.821668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.522 [2024-12-09 17:38:16.821683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.522 qpair failed and we were unable to recover it.
00:27:50.522 [2024-12-09 17:38:16.831643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.522 [2024-12-09 17:38:16.831698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.522 [2024-12-09 17:38:16.831711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.522 [2024-12-09 17:38:16.831718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.522 [2024-12-09 17:38:16.831724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.522 [2024-12-09 17:38:16.831739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.522 qpair failed and we were unable to recover it.
00:27:50.522 [2024-12-09 17:38:16.841624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.522 [2024-12-09 17:38:16.841680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.522 [2024-12-09 17:38:16.841694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.522 [2024-12-09 17:38:16.841700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.522 [2024-12-09 17:38:16.841707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.522 [2024-12-09 17:38:16.841722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.522 qpair failed and we were unable to recover it.
00:27:50.522 [2024-12-09 17:38:16.851721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.522 [2024-12-09 17:38:16.851777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.522 [2024-12-09 17:38:16.851793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.522 [2024-12-09 17:38:16.851800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.522 [2024-12-09 17:38:16.851806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.522 [2024-12-09 17:38:16.851821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.522 qpair failed and we were unable to recover it.
00:27:50.522 [2024-12-09 17:38:16.861673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.522 [2024-12-09 17:38:16.861730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.522 [2024-12-09 17:38:16.861743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.522 [2024-12-09 17:38:16.861750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.522 [2024-12-09 17:38:16.861756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.522 [2024-12-09 17:38:16.861771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.522 qpair failed and we were unable to recover it.
00:27:50.522 [2024-12-09 17:38:16.871718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.522 [2024-12-09 17:38:16.871773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.522 [2024-12-09 17:38:16.871786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.522 [2024-12-09 17:38:16.871793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.522 [2024-12-09 17:38:16.871799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.522 [2024-12-09 17:38:16.871814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.522 qpair failed and we were unable to recover it.
00:27:50.522 [2024-12-09 17:38:16.881726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.522 [2024-12-09 17:38:16.881791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.522 [2024-12-09 17:38:16.881808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.522 [2024-12-09 17:38:16.881815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.522 [2024-12-09 17:38:16.881821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.522 [2024-12-09 17:38:16.881841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.522 qpair failed and we were unable to recover it.
00:27:50.522 [2024-12-09 17:38:16.891835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.522 [2024-12-09 17:38:16.891940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.522 [2024-12-09 17:38:16.891954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.522 [2024-12-09 17:38:16.891960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.522 [2024-12-09 17:38:16.891969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.522 [2024-12-09 17:38:16.891985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.522 qpair failed and we were unable to recover it.
00:27:50.522 [2024-12-09 17:38:16.901834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.522 [2024-12-09 17:38:16.901895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.522 [2024-12-09 17:38:16.901909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.522 [2024-12-09 17:38:16.901916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.522 [2024-12-09 17:38:16.901922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.522 [2024-12-09 17:38:16.901937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.522 qpair failed and we were unable to recover it.
00:27:50.522 [2024-12-09 17:38:16.911841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.522 [2024-12-09 17:38:16.911928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.522 [2024-12-09 17:38:16.911941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.522 [2024-12-09 17:38:16.911947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.522 [2024-12-09 17:38:16.911953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.522 [2024-12-09 17:38:16.911969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.522 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:16.921858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:16.921921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:16.921935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.523 [2024-12-09 17:38:16.921942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.523 [2024-12-09 17:38:16.921948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.523 [2024-12-09 17:38:16.921965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.523 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:16.931953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:16.932008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:16.932020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.523 [2024-12-09 17:38:16.932027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.523 [2024-12-09 17:38:16.932035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.523 [2024-12-09 17:38:16.932051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.523 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:16.941928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:16.941987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:16.942000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.523 [2024-12-09 17:38:16.942009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.523 [2024-12-09 17:38:16.942015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.523 [2024-12-09 17:38:16.942031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.523 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:16.951947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:16.952027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:16.952041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.523 [2024-12-09 17:38:16.952049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.523 [2024-12-09 17:38:16.952056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.523 [2024-12-09 17:38:16.952071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.523 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:16.961975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:16.962028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:16.962042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.523 [2024-12-09 17:38:16.962048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.523 [2024-12-09 17:38:16.962055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.523 [2024-12-09 17:38:16.962070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.523 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:16.972036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:16.972119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:16.972132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.523 [2024-12-09 17:38:16.972140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.523 [2024-12-09 17:38:16.972146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.523 [2024-12-09 17:38:16.972160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.523 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:16.982040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:16.982100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:16.982113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.523 [2024-12-09 17:38:16.982120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.523 [2024-12-09 17:38:16.982126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.523 [2024-12-09 17:38:16.982140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.523 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:16.992065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:16.992117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:16.992129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.523 [2024-12-09 17:38:16.992136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.523 [2024-12-09 17:38:16.992142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.523 [2024-12-09 17:38:16.992157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.523 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:17.002091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:17.002141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:17.002154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.523 [2024-12-09 17:38:17.002161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.523 [2024-12-09 17:38:17.002170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.523 [2024-12-09 17:38:17.002185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.523 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:17.012108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:17.012174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:17.012187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.523 [2024-12-09 17:38:17.012194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.523 [2024-12-09 17:38:17.012202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.523 [2024-12-09 17:38:17.012217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.523 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:17.022156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:17.022211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:17.022224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.523 [2024-12-09 17:38:17.022234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.523 [2024-12-09 17:38:17.022241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.523 [2024-12-09 17:38:17.022256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.523 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:17.032180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:17.032235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:17.032248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.523 [2024-12-09 17:38:17.032255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.523 [2024-12-09 17:38:17.032261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.523 [2024-12-09 17:38:17.032277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.523 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:17.042222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:17.042276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:17.042289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.523 [2024-12-09 17:38:17.042296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.523 [2024-12-09 17:38:17.042303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.523 [2024-12-09 17:38:17.042318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.523 qpair failed and we were unable to recover it.
00:27:50.523 [2024-12-09 17:38:17.052250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.523 [2024-12-09 17:38:17.052304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.523 [2024-12-09 17:38:17.052317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.524 [2024-12-09 17:38:17.052323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.524 [2024-12-09 17:38:17.052331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.524 [2024-12-09 17:38:17.052346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.524 qpair failed and we were unable to recover it.
00:27:50.783 [2024-12-09 17:38:17.062290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.783 [2024-12-09 17:38:17.062368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.783 [2024-12-09 17:38:17.062382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.783 [2024-12-09 17:38:17.062389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.783 [2024-12-09 17:38:17.062395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.783 [2024-12-09 17:38:17.062413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.783 qpair failed and we were unable to recover it.
00:27:50.783 [2024-12-09 17:38:17.072310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.783 [2024-12-09 17:38:17.072368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.783 [2024-12-09 17:38:17.072381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.783 [2024-12-09 17:38:17.072388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.783 [2024-12-09 17:38:17.072395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.783 [2024-12-09 17:38:17.072410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.783 qpair failed and we were unable to recover it.
00:27:50.783 [2024-12-09 17:38:17.082258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.783 [2024-12-09 17:38:17.082320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.783 [2024-12-09 17:38:17.082333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.783 [2024-12-09 17:38:17.082340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.783 [2024-12-09 17:38:17.082346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.783 [2024-12-09 17:38:17.082361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.783 qpair failed and we were unable to recover it.
00:27:50.783 [2024-12-09 17:38:17.092359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.783 [2024-12-09 17:38:17.092417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.783 [2024-12-09 17:38:17.092429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.783 [2024-12-09 17:38:17.092436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.783 [2024-12-09 17:38:17.092442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.783 [2024-12-09 17:38:17.092457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.783 qpair failed and we were unable to recover it.
00:27:50.783 [2024-12-09 17:38:17.102433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.783 [2024-12-09 17:38:17.102493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.783 [2024-12-09 17:38:17.102506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.783 [2024-12-09 17:38:17.102515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.783 [2024-12-09 17:38:17.102522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.783 [2024-12-09 17:38:17.102537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.783 qpair failed and we were unable to recover it.
00:27:50.783 [2024-12-09 17:38:17.112415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.783 [2024-12-09 17:38:17.112472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.783 [2024-12-09 17:38:17.112486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.783 [2024-12-09 17:38:17.112493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.783 [2024-12-09 17:38:17.112499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.783 [2024-12-09 17:38:17.112514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.783 qpair failed and we were unable to recover it.
00:27:50.783 [2024-12-09 17:38:17.122437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.783 [2024-12-09 17:38:17.122489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.783 [2024-12-09 17:38:17.122502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.783 [2024-12-09 17:38:17.122509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.783 [2024-12-09 17:38:17.122515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.783 [2024-12-09 17:38:17.122531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.783 qpair failed and we were unable to recover it.
00:27:50.783 [2024-12-09 17:38:17.132479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.783 [2024-12-09 17:38:17.132563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.783 [2024-12-09 17:38:17.132578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.784 [2024-12-09 17:38:17.132584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.784 [2024-12-09 17:38:17.132591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.784 [2024-12-09 17:38:17.132606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.784 qpair failed and we were unable to recover it.
00:27:50.784 [2024-12-09 17:38:17.142505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.784 [2024-12-09 17:38:17.142560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.784 [2024-12-09 17:38:17.142573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.784 [2024-12-09 17:38:17.142580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.784 [2024-12-09 17:38:17.142586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.784 [2024-12-09 17:38:17.142602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.784 qpair failed and we were unable to recover it.
00:27:50.784 [2024-12-09 17:38:17.152539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.784 [2024-12-09 17:38:17.152596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.784 [2024-12-09 17:38:17.152609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.784 [2024-12-09 17:38:17.152619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.784 [2024-12-09 17:38:17.152625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.784 [2024-12-09 17:38:17.152641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.784 qpair failed and we were unable to recover it.
00:27:50.784 [2024-12-09 17:38:17.162554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.784 [2024-12-09 17:38:17.162606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.784 [2024-12-09 17:38:17.162619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.784 [2024-12-09 17:38:17.162626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.784 [2024-12-09 17:38:17.162633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:50.784 [2024-12-09 17:38:17.162648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.784 qpair failed and we were unable to recover it.
00:27:50.784 [2024-12-09 17:38:17.172594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.784 [2024-12-09 17:38:17.172649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.784 [2024-12-09 17:38:17.172662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.784 [2024-12-09 17:38:17.172669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.784 [2024-12-09 17:38:17.172675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.784 [2024-12-09 17:38:17.172690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.784 qpair failed and we were unable to recover it. 
00:27:50.784 [2024-12-09 17:38:17.182626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.784 [2024-12-09 17:38:17.182677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.784 [2024-12-09 17:38:17.182691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.784 [2024-12-09 17:38:17.182698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.784 [2024-12-09 17:38:17.182704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.784 [2024-12-09 17:38:17.182720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.784 qpair failed and we were unable to recover it. 
00:27:50.784 [2024-12-09 17:38:17.192645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.784 [2024-12-09 17:38:17.192704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.784 [2024-12-09 17:38:17.192717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.784 [2024-12-09 17:38:17.192724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.784 [2024-12-09 17:38:17.192731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.784 [2024-12-09 17:38:17.192748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.784 qpair failed and we were unable to recover it. 
00:27:50.784 [2024-12-09 17:38:17.202666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.784 [2024-12-09 17:38:17.202721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.784 [2024-12-09 17:38:17.202735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.784 [2024-12-09 17:38:17.202741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.784 [2024-12-09 17:38:17.202748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.784 [2024-12-09 17:38:17.202762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.784 qpair failed and we were unable to recover it. 
00:27:50.784 [2024-12-09 17:38:17.212743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.784 [2024-12-09 17:38:17.212821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.784 [2024-12-09 17:38:17.212834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.784 [2024-12-09 17:38:17.212842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.784 [2024-12-09 17:38:17.212848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.784 [2024-12-09 17:38:17.212863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.784 qpair failed and we were unable to recover it. 
00:27:50.784 [2024-12-09 17:38:17.222767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.784 [2024-12-09 17:38:17.222826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.784 [2024-12-09 17:38:17.222839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.784 [2024-12-09 17:38:17.222846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.784 [2024-12-09 17:38:17.222853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.784 [2024-12-09 17:38:17.222869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.784 qpair failed and we were unable to recover it. 
00:27:50.784 [2024-12-09 17:38:17.232751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.784 [2024-12-09 17:38:17.232806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.784 [2024-12-09 17:38:17.232819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.784 [2024-12-09 17:38:17.232826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.784 [2024-12-09 17:38:17.232832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.784 [2024-12-09 17:38:17.232847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.784 qpair failed and we were unable to recover it. 
00:27:50.784 [2024-12-09 17:38:17.242780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.784 [2024-12-09 17:38:17.242831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.784 [2024-12-09 17:38:17.242844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.784 [2024-12-09 17:38:17.242851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.784 [2024-12-09 17:38:17.242858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.784 [2024-12-09 17:38:17.242873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.784 qpair failed and we were unable to recover it. 
00:27:50.784 [2024-12-09 17:38:17.252814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.784 [2024-12-09 17:38:17.252874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.784 [2024-12-09 17:38:17.252887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.784 [2024-12-09 17:38:17.252894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.784 [2024-12-09 17:38:17.252900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.784 [2024-12-09 17:38:17.252915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.784 qpair failed and we were unable to recover it. 
00:27:50.784 [2024-12-09 17:38:17.262836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.784 [2024-12-09 17:38:17.262886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.784 [2024-12-09 17:38:17.262899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.784 [2024-12-09 17:38:17.262905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.784 [2024-12-09 17:38:17.262911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.784 [2024-12-09 17:38:17.262926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.785 qpair failed and we were unable to recover it. 
00:27:50.785 [2024-12-09 17:38:17.272869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.785 [2024-12-09 17:38:17.272917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.785 [2024-12-09 17:38:17.272930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.785 [2024-12-09 17:38:17.272937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.785 [2024-12-09 17:38:17.272943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.785 [2024-12-09 17:38:17.272958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.785 qpair failed and we were unable to recover it. 
00:27:50.785 [2024-12-09 17:38:17.282893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.785 [2024-12-09 17:38:17.282944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.785 [2024-12-09 17:38:17.282960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.785 [2024-12-09 17:38:17.282967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.785 [2024-12-09 17:38:17.282973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.785 [2024-12-09 17:38:17.282988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.785 qpair failed and we were unable to recover it. 
00:27:50.785 [2024-12-09 17:38:17.292927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.785 [2024-12-09 17:38:17.293017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.785 [2024-12-09 17:38:17.293030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.785 [2024-12-09 17:38:17.293037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.785 [2024-12-09 17:38:17.293043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.785 [2024-12-09 17:38:17.293058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.785 qpair failed and we were unable to recover it. 
00:27:50.785 [2024-12-09 17:38:17.302953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.785 [2024-12-09 17:38:17.303008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.785 [2024-12-09 17:38:17.303021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.785 [2024-12-09 17:38:17.303028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.785 [2024-12-09 17:38:17.303035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.785 [2024-12-09 17:38:17.303050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.785 qpair failed and we were unable to recover it. 
00:27:50.785 [2024-12-09 17:38:17.312984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.785 [2024-12-09 17:38:17.313037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.785 [2024-12-09 17:38:17.313051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.785 [2024-12-09 17:38:17.313057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.785 [2024-12-09 17:38:17.313064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:50.785 [2024-12-09 17:38:17.313079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.785 qpair failed and we were unable to recover it. 
00:27:51.044 [2024-12-09 17:38:17.323023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.044 [2024-12-09 17:38:17.323083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.044 [2024-12-09 17:38:17.323097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.044 [2024-12-09 17:38:17.323104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.044 [2024-12-09 17:38:17.323113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.323128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.045 [2024-12-09 17:38:17.333063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.045 [2024-12-09 17:38:17.333123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.045 [2024-12-09 17:38:17.333136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.045 [2024-12-09 17:38:17.333142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.045 [2024-12-09 17:38:17.333149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.333164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.045 [2024-12-09 17:38:17.343067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.045 [2024-12-09 17:38:17.343123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.045 [2024-12-09 17:38:17.343137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.045 [2024-12-09 17:38:17.343144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.045 [2024-12-09 17:38:17.343150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.343169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.045 [2024-12-09 17:38:17.353094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.045 [2024-12-09 17:38:17.353149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.045 [2024-12-09 17:38:17.353162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.045 [2024-12-09 17:38:17.353173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.045 [2024-12-09 17:38:17.353180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.353195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.045 [2024-12-09 17:38:17.363122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.045 [2024-12-09 17:38:17.363173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.045 [2024-12-09 17:38:17.363186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.045 [2024-12-09 17:38:17.363193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.045 [2024-12-09 17:38:17.363200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.363216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.045 [2024-12-09 17:38:17.373157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.045 [2024-12-09 17:38:17.373230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.045 [2024-12-09 17:38:17.373244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.045 [2024-12-09 17:38:17.373251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.045 [2024-12-09 17:38:17.373257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.373272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.045 [2024-12-09 17:38:17.383183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.045 [2024-12-09 17:38:17.383240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.045 [2024-12-09 17:38:17.383252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.045 [2024-12-09 17:38:17.383260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.045 [2024-12-09 17:38:17.383266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.383282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.045 [2024-12-09 17:38:17.393197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.045 [2024-12-09 17:38:17.393254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.045 [2024-12-09 17:38:17.393267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.045 [2024-12-09 17:38:17.393274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.045 [2024-12-09 17:38:17.393281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.393296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.045 [2024-12-09 17:38:17.403247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.045 [2024-12-09 17:38:17.403301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.045 [2024-12-09 17:38:17.403314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.045 [2024-12-09 17:38:17.403321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.045 [2024-12-09 17:38:17.403327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.403342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.045 [2024-12-09 17:38:17.413267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.045 [2024-12-09 17:38:17.413325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.045 [2024-12-09 17:38:17.413342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.045 [2024-12-09 17:38:17.413349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.045 [2024-12-09 17:38:17.413355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.413370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.045 [2024-12-09 17:38:17.423276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.045 [2024-12-09 17:38:17.423334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.045 [2024-12-09 17:38:17.423348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.045 [2024-12-09 17:38:17.423355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.045 [2024-12-09 17:38:17.423361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.423377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.045 [2024-12-09 17:38:17.433233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.045 [2024-12-09 17:38:17.433290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.045 [2024-12-09 17:38:17.433303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.045 [2024-12-09 17:38:17.433310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.045 [2024-12-09 17:38:17.433316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.433331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.045 [2024-12-09 17:38:17.443334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.045 [2024-12-09 17:38:17.443407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.045 [2024-12-09 17:38:17.443420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.045 [2024-12-09 17:38:17.443427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.045 [2024-12-09 17:38:17.443433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.443449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.045 [2024-12-09 17:38:17.453402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.045 [2024-12-09 17:38:17.453460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.045 [2024-12-09 17:38:17.453473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.045 [2024-12-09 17:38:17.453480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.045 [2024-12-09 17:38:17.453489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.045 [2024-12-09 17:38:17.453505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.045 qpair failed and we were unable to recover it. 
00:27:51.046 [2024-12-09 17:38:17.463403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.046 [2024-12-09 17:38:17.463461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.046 [2024-12-09 17:38:17.463475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.046 [2024-12-09 17:38:17.463481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.046 [2024-12-09 17:38:17.463487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.046 [2024-12-09 17:38:17.463503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.046 qpair failed and we were unable to recover it. 
00:27:51.046 [2024-12-09 17:38:17.473464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.046 [2024-12-09 17:38:17.473523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.046 [2024-12-09 17:38:17.473536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.046 [2024-12-09 17:38:17.473543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.046 [2024-12-09 17:38:17.473549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.046 [2024-12-09 17:38:17.473564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.046 qpair failed and we were unable to recover it. 
00:27:51.046 [2024-12-09 17:38:17.483450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.046 [2024-12-09 17:38:17.483510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.046 [2024-12-09 17:38:17.483523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.046 [2024-12-09 17:38:17.483530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.046 [2024-12-09 17:38:17.483536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.046 [2024-12-09 17:38:17.483551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.046 qpair failed and we were unable to recover it. 
00:27:51.046 [2024-12-09 17:38:17.493485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.046 [2024-12-09 17:38:17.493578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.046 [2024-12-09 17:38:17.493591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.046 [2024-12-09 17:38:17.493598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.046 [2024-12-09 17:38:17.493605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.046 [2024-12-09 17:38:17.493619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.046 qpair failed and we were unable to recover it. 
00:27:51.046 [2024-12-09 17:38:17.503512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.046 [2024-12-09 17:38:17.503566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.046 [2024-12-09 17:38:17.503579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.046 [2024-12-09 17:38:17.503585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.046 [2024-12-09 17:38:17.503592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.046 [2024-12-09 17:38:17.503608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.046 qpair failed and we were unable to recover it. 
00:27:51.046 [2024-12-09 17:38:17.513542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.046 [2024-12-09 17:38:17.513605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.046 [2024-12-09 17:38:17.513618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.046 [2024-12-09 17:38:17.513624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.046 [2024-12-09 17:38:17.513631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.046 [2024-12-09 17:38:17.513646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.046 qpair failed and we were unable to recover it. 
00:27:51.046 [2024-12-09 17:38:17.523643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.046 [2024-12-09 17:38:17.523735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.046 [2024-12-09 17:38:17.523749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.046 [2024-12-09 17:38:17.523755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.046 [2024-12-09 17:38:17.523762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.046 [2024-12-09 17:38:17.523779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.046 qpair failed and we were unable to recover it. 
00:27:51.046 [2024-12-09 17:38:17.533609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.046 [2024-12-09 17:38:17.533663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.046 [2024-12-09 17:38:17.533677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.046 [2024-12-09 17:38:17.533684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.046 [2024-12-09 17:38:17.533690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.046 [2024-12-09 17:38:17.533706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.046 qpair failed and we were unable to recover it. 
00:27:51.046 [2024-12-09 17:38:17.543623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.046 [2024-12-09 17:38:17.543683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.046 [2024-12-09 17:38:17.543696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.046 [2024-12-09 17:38:17.543703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.046 [2024-12-09 17:38:17.543709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.046 [2024-12-09 17:38:17.543725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.046 qpair failed and we were unable to recover it. 
00:27:51.046 [2024-12-09 17:38:17.553697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.046 [2024-12-09 17:38:17.553767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.046 [2024-12-09 17:38:17.553781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.046 [2024-12-09 17:38:17.553787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.046 [2024-12-09 17:38:17.553793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.046 [2024-12-09 17:38:17.553809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.046 qpair failed and we were unable to recover it. 
00:27:51.046 [2024-12-09 17:38:17.563674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.046 [2024-12-09 17:38:17.563726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.046 [2024-12-09 17:38:17.563739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.046 [2024-12-09 17:38:17.563746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.046 [2024-12-09 17:38:17.563752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.046 [2024-12-09 17:38:17.563767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.046 qpair failed and we were unable to recover it. 
00:27:51.046 [2024-12-09 17:38:17.573688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.046 [2024-12-09 17:38:17.573764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.046 [2024-12-09 17:38:17.573777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.046 [2024-12-09 17:38:17.573784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.046 [2024-12-09 17:38:17.573790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.046 [2024-12-09 17:38:17.573806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.046 qpair failed and we were unable to recover it. 
00:27:51.306 [2024-12-09 17:38:17.583751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.306 [2024-12-09 17:38:17.583814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.306 [2024-12-09 17:38:17.583827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.306 [2024-12-09 17:38:17.583837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.306 [2024-12-09 17:38:17.583843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.306 [2024-12-09 17:38:17.583858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.306 qpair failed and we were unable to recover it. 
00:27:51.306 [2024-12-09 17:38:17.593790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.306 [2024-12-09 17:38:17.593845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.306 [2024-12-09 17:38:17.593858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.306 [2024-12-09 17:38:17.593865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.306 [2024-12-09 17:38:17.593871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.306 [2024-12-09 17:38:17.593886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.306 qpair failed and we were unable to recover it. 
00:27:51.306 [2024-12-09 17:38:17.603867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.306 [2024-12-09 17:38:17.603923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.306 [2024-12-09 17:38:17.603936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.306 [2024-12-09 17:38:17.603942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.306 [2024-12-09 17:38:17.603949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.306 [2024-12-09 17:38:17.603964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.306 qpair failed and we were unable to recover it. 
00:27:51.306 [2024-12-09 17:38:17.613823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.306 [2024-12-09 17:38:17.613882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.306 [2024-12-09 17:38:17.613895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.306 [2024-12-09 17:38:17.613902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.306 [2024-12-09 17:38:17.613908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.306 [2024-12-09 17:38:17.613922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.306 qpair failed and we were unable to recover it. 
00:27:51.306 [2024-12-09 17:38:17.623845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.306 [2024-12-09 17:38:17.623950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.306 [2024-12-09 17:38:17.623964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.306 [2024-12-09 17:38:17.623971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.306 [2024-12-09 17:38:17.623978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.306 [2024-12-09 17:38:17.623996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.306 qpair failed and we were unable to recover it. 
00:27:51.306 [2024-12-09 17:38:17.633873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.306 [2024-12-09 17:38:17.633925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.306 [2024-12-09 17:38:17.633939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.306 [2024-12-09 17:38:17.633946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.306 [2024-12-09 17:38:17.633953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.306 [2024-12-09 17:38:17.633968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.306 qpair failed and we were unable to recover it. 
00:27:51.306 [2024-12-09 17:38:17.643903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.306 [2024-12-09 17:38:17.643954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.306 [2024-12-09 17:38:17.643967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.306 [2024-12-09 17:38:17.643974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.306 [2024-12-09 17:38:17.643981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.306 [2024-12-09 17:38:17.643997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.306 qpair failed and we were unable to recover it. 
00:27:51.306 [2024-12-09 17:38:17.653941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.306 [2024-12-09 17:38:17.653997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.306 [2024-12-09 17:38:17.654010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.306 [2024-12-09 17:38:17.654018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.306 [2024-12-09 17:38:17.654024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.306 [2024-12-09 17:38:17.654039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.306 qpair failed and we were unable to recover it. 
00:27:51.306 [2024-12-09 17:38:17.663985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.306 [2024-12-09 17:38:17.664044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.306 [2024-12-09 17:38:17.664057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.306 [2024-12-09 17:38:17.664064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.306 [2024-12-09 17:38:17.664071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.306 [2024-12-09 17:38:17.664085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.306 qpair failed and we were unable to recover it. 
00:27:51.306 [2024-12-09 17:38:17.673987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.307 [2024-12-09 17:38:17.674043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.307 [2024-12-09 17:38:17.674056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.307 [2024-12-09 17:38:17.674063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.307 [2024-12-09 17:38:17.674069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.307 [2024-12-09 17:38:17.674085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.307 qpair failed and we were unable to recover it. 
00:27:51.307 [2024-12-09 17:38:17.684010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.307 [2024-12-09 17:38:17.684064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.307 [2024-12-09 17:38:17.684076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.307 [2024-12-09 17:38:17.684083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.307 [2024-12-09 17:38:17.684089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.307 [2024-12-09 17:38:17.684104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.307 qpair failed and we were unable to recover it. 
00:27:51.307 [2024-12-09 17:38:17.694055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.307 [2024-12-09 17:38:17.694110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.307 [2024-12-09 17:38:17.694124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.307 [2024-12-09 17:38:17.694130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.307 [2024-12-09 17:38:17.694137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.307 [2024-12-09 17:38:17.694152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.307 qpair failed and we were unable to recover it. 
00:27:51.307 [2024-12-09 17:38:17.704081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.307 [2024-12-09 17:38:17.704141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.307 [2024-12-09 17:38:17.704155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.307 [2024-12-09 17:38:17.704161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.307 [2024-12-09 17:38:17.704171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.307 [2024-12-09 17:38:17.704187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.307 qpair failed and we were unable to recover it. 
00:27:51.307 [2024-12-09 17:38:17.714039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.307 [2024-12-09 17:38:17.714092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.307 [2024-12-09 17:38:17.714108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.307 [2024-12-09 17:38:17.714115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.307 [2024-12-09 17:38:17.714121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.307 [2024-12-09 17:38:17.714137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.307 qpair failed and we were unable to recover it. 
00:27:51.307 [2024-12-09 17:38:17.724130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.307 [2024-12-09 17:38:17.724229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.307 [2024-12-09 17:38:17.724244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.307 [2024-12-09 17:38:17.724251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.307 [2024-12-09 17:38:17.724257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.307 [2024-12-09 17:38:17.724272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.307 qpair failed and we were unable to recover it. 
00:27:51.307 [2024-12-09 17:38:17.734173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.307 [2024-12-09 17:38:17.734229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.307 [2024-12-09 17:38:17.734242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.307 [2024-12-09 17:38:17.734249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.307 [2024-12-09 17:38:17.734255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.307 [2024-12-09 17:38:17.734271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.307 qpair failed and we were unable to recover it. 
00:27:51.307 [2024-12-09 17:38:17.744110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.307 [2024-12-09 17:38:17.744180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.307 [2024-12-09 17:38:17.744194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.307 [2024-12-09 17:38:17.744201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.307 [2024-12-09 17:38:17.744206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.307 [2024-12-09 17:38:17.744222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.307 qpair failed and we were unable to recover it. 
00:27:51.307 [2024-12-09 17:38:17.754246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.307 [2024-12-09 17:38:17.754306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.307 [2024-12-09 17:38:17.754319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.307 [2024-12-09 17:38:17.754326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.307 [2024-12-09 17:38:17.754332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.307 [2024-12-09 17:38:17.754351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.307 qpair failed and we were unable to recover it. 
00:27:51.307 [... identical CONNECT failure sequence repeated 34 more times at ~10 ms intervals, 2024-12-09 17:38:17.764 through 17:38:18.095 (Unknown controller ID 0x1; Connect command failed, rc -5, sct 1, sc 130; CQ transport error -6 on qpair id 1); every attempt ended "qpair failed and we were unable to recover it." ...]
00:27:51.568 [2024-12-09 17:38:18.105142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.568 [2024-12-09 17:38:18.105223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.568 [2024-12-09 17:38:18.105237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.568 [2024-12-09 17:38:18.105244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.568 [2024-12-09 17:38:18.105250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.568 [2024-12-09 17:38:18.105264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.568 qpair failed and we were unable to recover it. 
00:27:51.827 [2024-12-09 17:38:18.115241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.827 [2024-12-09 17:38:18.115302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.827 [2024-12-09 17:38:18.115314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.827 [2024-12-09 17:38:18.115321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.827 [2024-12-09 17:38:18.115327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.827 [2024-12-09 17:38:18.115343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.827 qpair failed and we were unable to recover it. 
00:27:51.827 [2024-12-09 17:38:18.125256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.827 [2024-12-09 17:38:18.125310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.827 [2024-12-09 17:38:18.125325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.827 [2024-12-09 17:38:18.125331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.827 [2024-12-09 17:38:18.125338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.827 [2024-12-09 17:38:18.125354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.827 qpair failed and we were unable to recover it. 
00:27:51.827 [2024-12-09 17:38:18.135339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.827 [2024-12-09 17:38:18.135398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.827 [2024-12-09 17:38:18.135411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.827 [2024-12-09 17:38:18.135418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.827 [2024-12-09 17:38:18.135425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.827 [2024-12-09 17:38:18.135440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.827 qpair failed and we were unable to recover it. 
00:27:51.827 [2024-12-09 17:38:18.145314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.827 [2024-12-09 17:38:18.145372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.827 [2024-12-09 17:38:18.145387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.827 [2024-12-09 17:38:18.145398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.827 [2024-12-09 17:38:18.145404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.827 [2024-12-09 17:38:18.145419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.827 qpair failed and we were unable to recover it. 
00:27:51.827 [2024-12-09 17:38:18.155318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.827 [2024-12-09 17:38:18.155383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.155396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.155403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.155409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.828 [2024-12-09 17:38:18.155424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.828 qpair failed and we were unable to recover it. 
00:27:51.828 [2024-12-09 17:38:18.165398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.828 [2024-12-09 17:38:18.165453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.165466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.165473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.165479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.828 [2024-12-09 17:38:18.165495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.828 qpair failed and we were unable to recover it. 
00:27:51.828 [2024-12-09 17:38:18.175416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.828 [2024-12-09 17:38:18.175485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.175498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.175505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.175511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.828 [2024-12-09 17:38:18.175525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.828 qpair failed and we were unable to recover it. 
00:27:51.828 [2024-12-09 17:38:18.185428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.828 [2024-12-09 17:38:18.185498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.185512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.185519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.185525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.828 [2024-12-09 17:38:18.185543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.828 qpair failed and we were unable to recover it. 
00:27:51.828 [2024-12-09 17:38:18.195483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.828 [2024-12-09 17:38:18.195546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.195559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.195566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.195572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.828 [2024-12-09 17:38:18.195587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.828 qpair failed and we were unable to recover it. 
00:27:51.828 [2024-12-09 17:38:18.205401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.828 [2024-12-09 17:38:18.205451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.205464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.205471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.205477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.828 [2024-12-09 17:38:18.205492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.828 qpair failed and we were unable to recover it. 
00:27:51.828 [2024-12-09 17:38:18.215546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.828 [2024-12-09 17:38:18.215601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.215615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.215622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.215629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.828 [2024-12-09 17:38:18.215644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.828 qpair failed and we were unable to recover it. 
00:27:51.828 [2024-12-09 17:38:18.225460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.828 [2024-12-09 17:38:18.225511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.225525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.225532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.225538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.828 [2024-12-09 17:38:18.225552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.828 qpair failed and we were unable to recover it. 
00:27:51.828 [2024-12-09 17:38:18.235552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.828 [2024-12-09 17:38:18.235611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.235625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.235632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.235639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.828 [2024-12-09 17:38:18.235654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.828 qpair failed and we were unable to recover it. 
00:27:51.828 [2024-12-09 17:38:18.245583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.828 [2024-12-09 17:38:18.245637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.245651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.245657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.245663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.828 [2024-12-09 17:38:18.245679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.828 qpair failed and we were unable to recover it. 
00:27:51.828 [2024-12-09 17:38:18.255545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.828 [2024-12-09 17:38:18.255604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.255618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.255624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.255630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.828 [2024-12-09 17:38:18.255645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.828 qpair failed and we were unable to recover it. 
00:27:51.828 [2024-12-09 17:38:18.265642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.828 [2024-12-09 17:38:18.265697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.265711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.265718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.265724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.828 [2024-12-09 17:38:18.265739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.828 qpair failed and we were unable to recover it. 
00:27:51.828 [2024-12-09 17:38:18.275637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.828 [2024-12-09 17:38:18.275730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.275749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.275758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.275765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.828 [2024-12-09 17:38:18.275780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.828 qpair failed and we were unable to recover it. 
00:27:51.828 [2024-12-09 17:38:18.285636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.828 [2024-12-09 17:38:18.285691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.828 [2024-12-09 17:38:18.285704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.828 [2024-12-09 17:38:18.285711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.828 [2024-12-09 17:38:18.285717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.829 [2024-12-09 17:38:18.285732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.829 qpair failed and we were unable to recover it. 
00:27:51.829 [2024-12-09 17:38:18.295738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.829 [2024-12-09 17:38:18.295836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.829 [2024-12-09 17:38:18.295849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.829 [2024-12-09 17:38:18.295856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.829 [2024-12-09 17:38:18.295863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.829 [2024-12-09 17:38:18.295878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.829 qpair failed and we were unable to recover it. 
00:27:51.829 [2024-12-09 17:38:18.305787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.829 [2024-12-09 17:38:18.305850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.829 [2024-12-09 17:38:18.305864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.829 [2024-12-09 17:38:18.305871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.829 [2024-12-09 17:38:18.305877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.829 [2024-12-09 17:38:18.305892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.829 qpair failed and we were unable to recover it. 
00:27:51.829 [2024-12-09 17:38:18.315794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.829 [2024-12-09 17:38:18.315846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.829 [2024-12-09 17:38:18.315859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.829 [2024-12-09 17:38:18.315866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.829 [2024-12-09 17:38:18.315872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.829 [2024-12-09 17:38:18.315890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.829 qpair failed and we were unable to recover it. 
00:27:51.829 [2024-12-09 17:38:18.325808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.829 [2024-12-09 17:38:18.325860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.829 [2024-12-09 17:38:18.325873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.829 [2024-12-09 17:38:18.325879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.829 [2024-12-09 17:38:18.325886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.829 [2024-12-09 17:38:18.325901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.829 qpair failed and we were unable to recover it. 
00:27:51.829 [2024-12-09 17:38:18.335835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.829 [2024-12-09 17:38:18.335887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.829 [2024-12-09 17:38:18.335901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.829 [2024-12-09 17:38:18.335908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.829 [2024-12-09 17:38:18.335914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.829 [2024-12-09 17:38:18.335929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.829 qpair failed and we were unable to recover it. 
00:27:51.829 [2024-12-09 17:38:18.345871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.829 [2024-12-09 17:38:18.345965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.829 [2024-12-09 17:38:18.345978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.829 [2024-12-09 17:38:18.345985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.829 [2024-12-09 17:38:18.345991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.829 [2024-12-09 17:38:18.346006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.829 qpair failed and we were unable to recover it. 
00:27:51.829 [2024-12-09 17:38:18.355910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.829 [2024-12-09 17:38:18.355965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.829 [2024-12-09 17:38:18.355978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.829 [2024-12-09 17:38:18.355985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.829 [2024-12-09 17:38:18.355992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.829 [2024-12-09 17:38:18.356008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.829 qpair failed and we were unable to recover it. 
00:27:51.829 [2024-12-09 17:38:18.365964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.829 [2024-12-09 17:38:18.366069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.829 [2024-12-09 17:38:18.366083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.829 [2024-12-09 17:38:18.366089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.829 [2024-12-09 17:38:18.366096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:51.829 [2024-12-09 17:38:18.366110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.829 qpair failed and we were unable to recover it. 
00:27:52.088 [2024-12-09 17:38:18.376001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.088 [2024-12-09 17:38:18.376071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.088 [2024-12-09 17:38:18.376084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.088 [2024-12-09 17:38:18.376091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.088 [2024-12-09 17:38:18.376098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.088 [2024-12-09 17:38:18.376112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.088 qpair failed and we were unable to recover it.
00:27:52.088 [2024-12-09 17:38:18.385983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.088 [2024-12-09 17:38:18.386035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.088 [2024-12-09 17:38:18.386048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.088 [2024-12-09 17:38:18.386055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.088 [2024-12-09 17:38:18.386061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.088 [2024-12-09 17:38:18.386078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.088 qpair failed and we were unable to recover it.
00:27:52.088 [2024-12-09 17:38:18.396039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.088 [2024-12-09 17:38:18.396095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.088 [2024-12-09 17:38:18.396108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.088 [2024-12-09 17:38:18.396116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.088 [2024-12-09 17:38:18.396122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.088 [2024-12-09 17:38:18.396137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.088 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.406099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.406153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.406172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.089 [2024-12-09 17:38:18.406179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.089 [2024-12-09 17:38:18.406186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.089 [2024-12-09 17:38:18.406202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.089 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.416048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.416152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.416170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.089 [2024-12-09 17:38:18.416178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.089 [2024-12-09 17:38:18.416184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.089 [2024-12-09 17:38:18.416201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.089 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.426123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.426179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.426193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.089 [2024-12-09 17:38:18.426200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.089 [2024-12-09 17:38:18.426206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.089 [2024-12-09 17:38:18.426222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.089 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.436151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.436209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.436222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.089 [2024-12-09 17:38:18.436229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.089 [2024-12-09 17:38:18.436237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.089 [2024-12-09 17:38:18.436251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.089 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.446178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.446234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.446247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.089 [2024-12-09 17:38:18.446254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.089 [2024-12-09 17:38:18.446263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.089 [2024-12-09 17:38:18.446279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.089 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.456216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.456275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.456288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.089 [2024-12-09 17:38:18.456295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.089 [2024-12-09 17:38:18.456301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.089 [2024-12-09 17:38:18.456317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.089 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.466263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.466323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.466337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.089 [2024-12-09 17:38:18.466344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.089 [2024-12-09 17:38:18.466350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.089 [2024-12-09 17:38:18.466365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.089 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.476307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.476367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.476380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.089 [2024-12-09 17:38:18.476387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.089 [2024-12-09 17:38:18.476393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.089 [2024-12-09 17:38:18.476408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.089 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.486280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.486331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.486344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.089 [2024-12-09 17:38:18.486351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.089 [2024-12-09 17:38:18.486357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.089 [2024-12-09 17:38:18.486373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.089 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.496325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.496381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.496393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.089 [2024-12-09 17:38:18.496400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.089 [2024-12-09 17:38:18.496406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.089 [2024-12-09 17:38:18.496422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.089 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.506344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.506397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.506410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.089 [2024-12-09 17:38:18.506417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.089 [2024-12-09 17:38:18.506423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.089 [2024-12-09 17:38:18.506438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.089 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.516390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.516445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.516458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.089 [2024-12-09 17:38:18.516464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.089 [2024-12-09 17:38:18.516471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.089 [2024-12-09 17:38:18.516485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.089 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.526398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.526451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.526465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.089 [2024-12-09 17:38:18.526472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.089 [2024-12-09 17:38:18.526478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.089 [2024-12-09 17:38:18.526494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.089 qpair failed and we were unable to recover it.
00:27:52.089 [2024-12-09 17:38:18.536432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.089 [2024-12-09 17:38:18.536485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.089 [2024-12-09 17:38:18.536501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.090 [2024-12-09 17:38:18.536508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.090 [2024-12-09 17:38:18.536514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.090 [2024-12-09 17:38:18.536529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.090 qpair failed and we were unable to recover it.
00:27:52.090 [2024-12-09 17:38:18.546479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.090 [2024-12-09 17:38:18.546535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.090 [2024-12-09 17:38:18.546548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.090 [2024-12-09 17:38:18.546555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.090 [2024-12-09 17:38:18.546562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.090 [2024-12-09 17:38:18.546576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.090 qpair failed and we were unable to recover it.
00:27:52.090 [2024-12-09 17:38:18.556490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.090 [2024-12-09 17:38:18.556573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.090 [2024-12-09 17:38:18.556587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.090 [2024-12-09 17:38:18.556594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.090 [2024-12-09 17:38:18.556601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.090 [2024-12-09 17:38:18.556617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.090 qpair failed and we were unable to recover it.
00:27:52.090 [2024-12-09 17:38:18.566516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.090 [2024-12-09 17:38:18.566580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.090 [2024-12-09 17:38:18.566592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.090 [2024-12-09 17:38:18.566600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.090 [2024-12-09 17:38:18.566605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.090 [2024-12-09 17:38:18.566620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.090 qpair failed and we were unable to recover it.
00:27:52.090 [2024-12-09 17:38:18.576562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.090 [2024-12-09 17:38:18.576617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.090 [2024-12-09 17:38:18.576630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.090 [2024-12-09 17:38:18.576640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.090 [2024-12-09 17:38:18.576646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.090 [2024-12-09 17:38:18.576661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.090 qpair failed and we were unable to recover it.
00:27:52.090 [2024-12-09 17:38:18.586523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.090 [2024-12-09 17:38:18.586582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.090 [2024-12-09 17:38:18.586596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.090 [2024-12-09 17:38:18.586603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.090 [2024-12-09 17:38:18.586609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.090 [2024-12-09 17:38:18.586624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.090 qpair failed and we were unable to recover it.
00:27:52.090 [2024-12-09 17:38:18.596635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.090 [2024-12-09 17:38:18.596717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.090 [2024-12-09 17:38:18.596730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.090 [2024-12-09 17:38:18.596737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.090 [2024-12-09 17:38:18.596744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.090 [2024-12-09 17:38:18.596758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.090 qpair failed and we were unable to recover it.
00:27:52.090 [2024-12-09 17:38:18.606634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.090 [2024-12-09 17:38:18.606685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.090 [2024-12-09 17:38:18.606698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.090 [2024-12-09 17:38:18.606705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.090 [2024-12-09 17:38:18.606712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.090 [2024-12-09 17:38:18.606726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.090 qpair failed and we were unable to recover it.
00:27:52.090 [2024-12-09 17:38:18.616647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.090 [2024-12-09 17:38:18.616703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.090 [2024-12-09 17:38:18.616716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.090 [2024-12-09 17:38:18.616724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.090 [2024-12-09 17:38:18.616730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.090 [2024-12-09 17:38:18.616745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.090 qpair failed and we were unable to recover it.
00:27:52.090 [2024-12-09 17:38:18.626693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.090 [2024-12-09 17:38:18.626752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.090 [2024-12-09 17:38:18.626766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.090 [2024-12-09 17:38:18.626772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.090 [2024-12-09 17:38:18.626778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.090 [2024-12-09 17:38:18.626794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.090 qpair failed and we were unable to recover it.
00:27:52.350 [2024-12-09 17:38:18.636768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.350 [2024-12-09 17:38:18.636826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.350 [2024-12-09 17:38:18.636839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.350 [2024-12-09 17:38:18.636846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.350 [2024-12-09 17:38:18.636852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.350 [2024-12-09 17:38:18.636867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.350 qpair failed and we were unable to recover it.
00:27:52.350 [2024-12-09 17:38:18.646678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.350 [2024-12-09 17:38:18.646755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.350 [2024-12-09 17:38:18.646769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.350 [2024-12-09 17:38:18.646775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.350 [2024-12-09 17:38:18.646781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.350 [2024-12-09 17:38:18.646796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.350 qpair failed and we were unable to recover it.
00:27:52.350 [2024-12-09 17:38:18.656776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.350 [2024-12-09 17:38:18.656835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.350 [2024-12-09 17:38:18.656848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.350 [2024-12-09 17:38:18.656855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.350 [2024-12-09 17:38:18.656861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.350 [2024-12-09 17:38:18.656877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.350 qpair failed and we were unable to recover it.
00:27:52.350 [2024-12-09 17:38:18.666802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.350 [2024-12-09 17:38:18.666861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.350 [2024-12-09 17:38:18.666874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.350 [2024-12-09 17:38:18.666881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.350 [2024-12-09 17:38:18.666888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.350 [2024-12-09 17:38:18.666904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.350 qpair failed and we were unable to recover it.
00:27:52.350 [2024-12-09 17:38:18.676750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.350 [2024-12-09 17:38:18.676804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.350 [2024-12-09 17:38:18.676818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.350 [2024-12-09 17:38:18.676824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.350 [2024-12-09 17:38:18.676830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.350 [2024-12-09 17:38:18.676846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.350 qpair failed and we were unable to recover it.
00:27:52.350 [2024-12-09 17:38:18.686845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.350 [2024-12-09 17:38:18.686922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.350 [2024-12-09 17:38:18.686935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.350 [2024-12-09 17:38:18.686942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.350 [2024-12-09 17:38:18.686948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.350 [2024-12-09 17:38:18.686963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.350 qpair failed and we were unable to recover it.
00:27:52.350 [2024-12-09 17:38:18.696935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.350 [2024-12-09 17:38:18.696989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.350 [2024-12-09 17:38:18.697003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.350 [2024-12-09 17:38:18.697010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.350 [2024-12-09 17:38:18.697015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.350 [2024-12-09 17:38:18.697031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.350 qpair failed and we were unable to recover it.
00:27:52.350 [2024-12-09 17:38:18.706916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.350 [2024-12-09 17:38:18.706987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.350 [2024-12-09 17:38:18.707001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.350 [2024-12-09 17:38:18.707011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.350 [2024-12-09 17:38:18.707017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.350 [2024-12-09 17:38:18.707032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.350 qpair failed and we were unable to recover it.
00:27:52.350 [2024-12-09 17:38:18.716937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.350 [2024-12-09 17:38:18.717009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.350 [2024-12-09 17:38:18.717023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.350 [2024-12-09 17:38:18.717029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.350 [2024-12-09 17:38:18.717035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.350 [2024-12-09 17:38:18.717051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.350 qpair failed and we were unable to recover it.
00:27:52.350 [2024-12-09 17:38:18.727028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.350 [2024-12-09 17:38:18.727083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.350 [2024-12-09 17:38:18.727097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.350 [2024-12-09 17:38:18.727104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.350 [2024-12-09 17:38:18.727110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.350 [2024-12-09 17:38:18.727126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.350 qpair failed and we were unable to recover it. 
00:27:52.350 [2024-12-09 17:38:18.737008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.350 [2024-12-09 17:38:18.737074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.350 [2024-12-09 17:38:18.737087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.350 [2024-12-09 17:38:18.737094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.737100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.737115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.351 [2024-12-09 17:38:18.747018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.351 [2024-12-09 17:38:18.747073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.351 [2024-12-09 17:38:18.747086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.351 [2024-12-09 17:38:18.747093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.747099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.747118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.351 [2024-12-09 17:38:18.757057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.351 [2024-12-09 17:38:18.757108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.351 [2024-12-09 17:38:18.757122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.351 [2024-12-09 17:38:18.757129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.757135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.757151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.351 [2024-12-09 17:38:18.767078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.351 [2024-12-09 17:38:18.767133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.351 [2024-12-09 17:38:18.767146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.351 [2024-12-09 17:38:18.767153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.767159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.767178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.351 [2024-12-09 17:38:18.777121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.351 [2024-12-09 17:38:18.777205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.351 [2024-12-09 17:38:18.777219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.351 [2024-12-09 17:38:18.777225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.777231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.777246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.351 [2024-12-09 17:38:18.787142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.351 [2024-12-09 17:38:18.787200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.351 [2024-12-09 17:38:18.787214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.351 [2024-12-09 17:38:18.787221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.787227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.787243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.351 [2024-12-09 17:38:18.797146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.351 [2024-12-09 17:38:18.797221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.351 [2024-12-09 17:38:18.797235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.351 [2024-12-09 17:38:18.797242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.797248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.797264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.351 [2024-12-09 17:38:18.807194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.351 [2024-12-09 17:38:18.807267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.351 [2024-12-09 17:38:18.807280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.351 [2024-12-09 17:38:18.807286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.807292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.807307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.351 [2024-12-09 17:38:18.817231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.351 [2024-12-09 17:38:18.817285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.351 [2024-12-09 17:38:18.817299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.351 [2024-12-09 17:38:18.817305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.817311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.817326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.351 [2024-12-09 17:38:18.827244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.351 [2024-12-09 17:38:18.827316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.351 [2024-12-09 17:38:18.827331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.351 [2024-12-09 17:38:18.827337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.827343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.827359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.351 [2024-12-09 17:38:18.837237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.351 [2024-12-09 17:38:18.837294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.351 [2024-12-09 17:38:18.837311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.351 [2024-12-09 17:38:18.837318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.837324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.837341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.351 [2024-12-09 17:38:18.847312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.351 [2024-12-09 17:38:18.847372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.351 [2024-12-09 17:38:18.847385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.351 [2024-12-09 17:38:18.847392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.847398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.847413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.351 [2024-12-09 17:38:18.857346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.351 [2024-12-09 17:38:18.857431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.351 [2024-12-09 17:38:18.857444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.351 [2024-12-09 17:38:18.857451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.857457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.857472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.351 [2024-12-09 17:38:18.867401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.351 [2024-12-09 17:38:18.867467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.351 [2024-12-09 17:38:18.867481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.351 [2024-12-09 17:38:18.867488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.351 [2024-12-09 17:38:18.867494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.351 [2024-12-09 17:38:18.867510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.351 qpair failed and we were unable to recover it. 
00:27:52.352 [2024-12-09 17:38:18.877394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.352 [2024-12-09 17:38:18.877449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.352 [2024-12-09 17:38:18.877462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.352 [2024-12-09 17:38:18.877469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.352 [2024-12-09 17:38:18.877480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.352 [2024-12-09 17:38:18.877495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.352 qpair failed and we were unable to recover it. 
00:27:52.352 [2024-12-09 17:38:18.887430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.352 [2024-12-09 17:38:18.887489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.352 [2024-12-09 17:38:18.887502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.352 [2024-12-09 17:38:18.887508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.352 [2024-12-09 17:38:18.887515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.352 [2024-12-09 17:38:18.887530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.352 qpair failed and we were unable to recover it. 
00:27:52.611 [2024-12-09 17:38:18.897472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.611 [2024-12-09 17:38:18.897531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.611 [2024-12-09 17:38:18.897545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.611 [2024-12-09 17:38:18.897551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.611 [2024-12-09 17:38:18.897558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.611 [2024-12-09 17:38:18.897573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.611 qpair failed and we were unable to recover it. 
00:27:52.611 [2024-12-09 17:38:18.907478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.611 [2024-12-09 17:38:18.907542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.611 [2024-12-09 17:38:18.907555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.611 [2024-12-09 17:38:18.907562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.611 [2024-12-09 17:38:18.907568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.611 [2024-12-09 17:38:18.907584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.611 qpair failed and we were unable to recover it. 
00:27:52.611 [2024-12-09 17:38:18.917507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.611 [2024-12-09 17:38:18.917562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.611 [2024-12-09 17:38:18.917575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.611 [2024-12-09 17:38:18.917582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.611 [2024-12-09 17:38:18.917588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.611 [2024-12-09 17:38:18.917603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.611 qpair failed and we were unable to recover it. 
00:27:52.611 [2024-12-09 17:38:18.927469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.611 [2024-12-09 17:38:18.927524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.611 [2024-12-09 17:38:18.927540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.611 [2024-12-09 17:38:18.927548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.611 [2024-12-09 17:38:18.927554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.611 [2024-12-09 17:38:18.927570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.611 qpair failed and we were unable to recover it. 
00:27:52.611 [2024-12-09 17:38:18.937582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.611 [2024-12-09 17:38:18.937653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.611 [2024-12-09 17:38:18.937666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.611 [2024-12-09 17:38:18.937673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.611 [2024-12-09 17:38:18.937679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.611 [2024-12-09 17:38:18.937695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.611 qpair failed and we were unable to recover it. 
00:27:52.611 [2024-12-09 17:38:18.947641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.611 [2024-12-09 17:38:18.947695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.611 [2024-12-09 17:38:18.947708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.612 [2024-12-09 17:38:18.947716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.612 [2024-12-09 17:38:18.947723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.612 [2024-12-09 17:38:18.947738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.612 qpair failed and we were unable to recover it. 
00:27:52.612 [2024-12-09 17:38:18.957615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.612 [2024-12-09 17:38:18.957668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.612 [2024-12-09 17:38:18.957681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.612 [2024-12-09 17:38:18.957688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.612 [2024-12-09 17:38:18.957694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.612 [2024-12-09 17:38:18.957709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.612 qpair failed and we were unable to recover it. 
00:27:52.612 [2024-12-09 17:38:18.967640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.612 [2024-12-09 17:38:18.967694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.612 [2024-12-09 17:38:18.967710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.612 [2024-12-09 17:38:18.967717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.612 [2024-12-09 17:38:18.967724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.612 [2024-12-09 17:38:18.967739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.612 qpair failed and we were unable to recover it. 
00:27:52.612 [2024-12-09 17:38:18.977671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.612 [2024-12-09 17:38:18.977728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.612 [2024-12-09 17:38:18.977741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.612 [2024-12-09 17:38:18.977749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.612 [2024-12-09 17:38:18.977755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.612 [2024-12-09 17:38:18.977769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.612 qpair failed and we were unable to recover it. 
00:27:52.612 [2024-12-09 17:38:18.987675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.612 [2024-12-09 17:38:18.987730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.612 [2024-12-09 17:38:18.987743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.612 [2024-12-09 17:38:18.987750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.612 [2024-12-09 17:38:18.987757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.612 [2024-12-09 17:38:18.987772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.612 qpair failed and we were unable to recover it. 
00:27:52.612 [2024-12-09 17:38:18.997757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.612 [2024-12-09 17:38:18.997825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.612 [2024-12-09 17:38:18.997839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.612 [2024-12-09 17:38:18.997846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.612 [2024-12-09 17:38:18.997852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.612 [2024-12-09 17:38:18.997866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.612 qpair failed and we were unable to recover it.
00:27:52.612 [2024-12-09 17:38:19.007778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.612 [2024-12-09 17:38:19.007845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.612 [2024-12-09 17:38:19.007858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.612 [2024-12-09 17:38:19.007865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.612 [2024-12-09 17:38:19.007873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.612 [2024-12-09 17:38:19.007889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.612 qpair failed and we were unable to recover it.
00:27:52.612 [2024-12-09 17:38:19.017793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.612 [2024-12-09 17:38:19.017849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.612 [2024-12-09 17:38:19.017862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.612 [2024-12-09 17:38:19.017869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.612 [2024-12-09 17:38:19.017875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.612 [2024-12-09 17:38:19.017891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.612 qpair failed and we were unable to recover it.
00:27:52.612 [2024-12-09 17:38:19.027750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.612 [2024-12-09 17:38:19.027817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.612 [2024-12-09 17:38:19.027830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.612 [2024-12-09 17:38:19.027838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.612 [2024-12-09 17:38:19.027844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.612 [2024-12-09 17:38:19.027859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.612 qpair failed and we were unable to recover it.
00:27:52.612 [2024-12-09 17:38:19.037840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.612 [2024-12-09 17:38:19.037890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.612 [2024-12-09 17:38:19.037903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.612 [2024-12-09 17:38:19.037910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.612 [2024-12-09 17:38:19.037916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.612 [2024-12-09 17:38:19.037932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.612 qpair failed and we were unable to recover it.
00:27:52.612 [2024-12-09 17:38:19.047871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.612 [2024-12-09 17:38:19.047928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.612 [2024-12-09 17:38:19.047942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.612 [2024-12-09 17:38:19.047949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.612 [2024-12-09 17:38:19.047956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.612 [2024-12-09 17:38:19.047970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.612 qpair failed and we were unable to recover it.
00:27:52.612 [2024-12-09 17:38:19.057853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.612 [2024-12-09 17:38:19.057912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.612 [2024-12-09 17:38:19.057925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.612 [2024-12-09 17:38:19.057932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.612 [2024-12-09 17:38:19.057939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.612 [2024-12-09 17:38:19.057954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.612 qpair failed and we were unable to recover it.
00:27:52.612 [2024-12-09 17:38:19.067939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.612 [2024-12-09 17:38:19.067993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.612 [2024-12-09 17:38:19.068006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.612 [2024-12-09 17:38:19.068013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.612 [2024-12-09 17:38:19.068019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.612 [2024-12-09 17:38:19.068035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.612 qpair failed and we were unable to recover it.
00:27:52.612 [2024-12-09 17:38:19.077955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.612 [2024-12-09 17:38:19.078015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.612 [2024-12-09 17:38:19.078028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.612 [2024-12-09 17:38:19.078035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.612 [2024-12-09 17:38:19.078042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.612 [2024-12-09 17:38:19.078057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.613 qpair failed and we were unable to recover it.
00:27:52.613 [2024-12-09 17:38:19.087984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.613 [2024-12-09 17:38:19.088042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.613 [2024-12-09 17:38:19.088056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.613 [2024-12-09 17:38:19.088063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.613 [2024-12-09 17:38:19.088069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.613 [2024-12-09 17:38:19.088084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.613 qpair failed and we were unable to recover it.
00:27:52.613 [2024-12-09 17:38:19.098016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.613 [2024-12-09 17:38:19.098073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.613 [2024-12-09 17:38:19.098090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.613 [2024-12-09 17:38:19.098098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.613 [2024-12-09 17:38:19.098105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.613 [2024-12-09 17:38:19.098120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.613 qpair failed and we were unable to recover it.
00:27:52.613 [2024-12-09 17:38:19.108043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.613 [2024-12-09 17:38:19.108095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.613 [2024-12-09 17:38:19.108109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.613 [2024-12-09 17:38:19.108117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.613 [2024-12-09 17:38:19.108124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.613 [2024-12-09 17:38:19.108140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.613 qpair failed and we were unable to recover it.
00:27:52.613 [2024-12-09 17:38:19.118099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.613 [2024-12-09 17:38:19.118156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.613 [2024-12-09 17:38:19.118174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.613 [2024-12-09 17:38:19.118181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.613 [2024-12-09 17:38:19.118187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.613 [2024-12-09 17:38:19.118203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.613 qpair failed and we were unable to recover it.
00:27:52.613 [2024-12-09 17:38:19.128182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.613 [2024-12-09 17:38:19.128240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.613 [2024-12-09 17:38:19.128255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.613 [2024-12-09 17:38:19.128262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.613 [2024-12-09 17:38:19.128268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.613 [2024-12-09 17:38:19.128284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.613 qpair failed and we were unable to recover it.
00:27:52.613 [2024-12-09 17:38:19.138052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.613 [2024-12-09 17:38:19.138110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.613 [2024-12-09 17:38:19.138123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.613 [2024-12-09 17:38:19.138134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.613 [2024-12-09 17:38:19.138140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.613 [2024-12-09 17:38:19.138155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.613 qpair failed and we were unable to recover it.
00:27:52.613 [2024-12-09 17:38:19.148200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.613 [2024-12-09 17:38:19.148278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.613 [2024-12-09 17:38:19.148292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.613 [2024-12-09 17:38:19.148298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.613 [2024-12-09 17:38:19.148305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.613 [2024-12-09 17:38:19.148319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.613 qpair failed and we were unable to recover it.
00:27:52.872 [2024-12-09 17:38:19.158234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.872 [2024-12-09 17:38:19.158294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.872 [2024-12-09 17:38:19.158308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.872 [2024-12-09 17:38:19.158315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.872 [2024-12-09 17:38:19.158321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.872 [2024-12-09 17:38:19.158336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.872 qpair failed and we were unable to recover it.
00:27:52.872 [2024-12-09 17:38:19.168139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.872 [2024-12-09 17:38:19.168198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.872 [2024-12-09 17:38:19.168212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.872 [2024-12-09 17:38:19.168220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.872 [2024-12-09 17:38:19.168226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.872 [2024-12-09 17:38:19.168241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.872 qpair failed and we were unable to recover it.
00:27:52.872 [2024-12-09 17:38:19.178245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.178303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.178315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.178322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.178329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.873 [2024-12-09 17:38:19.178344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 qpair failed and we were unable to recover it.
00:27:52.873 [2024-12-09 17:38:19.188268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.188343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.188357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.188364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.188370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.873 [2024-12-09 17:38:19.188386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 qpair failed and we were unable to recover it.
00:27:52.873 [2024-12-09 17:38:19.198338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.198395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.198407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.198414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.198421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.873 [2024-12-09 17:38:19.198436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 qpair failed and we were unable to recover it.
00:27:52.873 [2024-12-09 17:38:19.208283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.208382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.208395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.208402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.208408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.873 [2024-12-09 17:38:19.208423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 qpair failed and we were unable to recover it.
00:27:52.873 [2024-12-09 17:38:19.218382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.218460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.218474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.218481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.218488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.873 [2024-12-09 17:38:19.218503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 qpair failed and we were unable to recover it.
00:27:52.873 [2024-12-09 17:38:19.228353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.228411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.228425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.228432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.228438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.873 [2024-12-09 17:38:19.228453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 qpair failed and we were unable to recover it.
00:27:52.873 [2024-12-09 17:38:19.238335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.238396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.238408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.238415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.238422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.873 [2024-12-09 17:38:19.238437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 qpair failed and we were unable to recover it.
00:27:52.873 [2024-12-09 17:38:19.248485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.248573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.248586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.248593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.248601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.873 [2024-12-09 17:38:19.248616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 qpair failed and we were unable to recover it.
00:27:52.873 [2024-12-09 17:38:19.258446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.258502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.258515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.258522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.258528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.873 [2024-12-09 17:38:19.258543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 qpair failed and we were unable to recover it.
00:27:52.873 [2024-12-09 17:38:19.268502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.268601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.268614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.268624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.268631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.873 [2024-12-09 17:38:19.268646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 qpair failed and we were unable to recover it.
00:27:52.873 [2024-12-09 17:38:19.278506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.278559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.278571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.278578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.278584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.873 [2024-12-09 17:38:19.278599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 qpair failed and we were unable to recover it.
00:27:52.873 [2024-12-09 17:38:19.288474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.288529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.288542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.288549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.288555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.873 [2024-12-09 17:38:19.288570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 qpair failed and we were unable to recover it.
00:27:52.873 [2024-12-09 17:38:19.298572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.298628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.298641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.298648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.298653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.873 [2024-12-09 17:38:19.298668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 qpair failed and we were unable to recover it.
00:27:52.873 [2024-12-09 17:38:19.308596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.873 [2024-12-09 17:38:19.308650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.873 [2024-12-09 17:38:19.308663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.873 [2024-12-09 17:38:19.308670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.873 [2024-12-09 17:38:19.308676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.874 [2024-12-09 17:38:19.308694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.874 qpair failed and we were unable to recover it.
00:27:52.874 [2024-12-09 17:38:19.318625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.874 [2024-12-09 17:38:19.318677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.874 [2024-12-09 17:38:19.318690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.874 [2024-12-09 17:38:19.318697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.874 [2024-12-09 17:38:19.318703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.874 [2024-12-09 17:38:19.318718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.874 qpair failed and we were unable to recover it.
00:27:52.874 [2024-12-09 17:38:19.328612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.874 [2024-12-09 17:38:19.328668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.874 [2024-12-09 17:38:19.328682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.874 [2024-12-09 17:38:19.328689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.874 [2024-12-09 17:38:19.328695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.874 [2024-12-09 17:38:19.328711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.874 qpair failed and we were unable to recover it.
00:27:52.874 [2024-12-09 17:38:19.338753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.874 [2024-12-09 17:38:19.338848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.874 [2024-12-09 17:38:19.338862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.874 [2024-12-09 17:38:19.338869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.874 [2024-12-09 17:38:19.338875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90
00:27:52.874 [2024-12-09 17:38:19.338890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.874 qpair failed and we were unable to recover it.
00:27:52.874 [2024-12-09 17:38:19.348715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.874 [2024-12-09 17:38:19.348771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.874 [2024-12-09 17:38:19.348784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.874 [2024-12-09 17:38:19.348790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.874 [2024-12-09 17:38:19.348796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.874 [2024-12-09 17:38:19.348812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.874 qpair failed and we were unable to recover it. 
00:27:52.874 [2024-12-09 17:38:19.358676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.874 [2024-12-09 17:38:19.358729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.874 [2024-12-09 17:38:19.358742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.874 [2024-12-09 17:38:19.358748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.874 [2024-12-09 17:38:19.358754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.874 [2024-12-09 17:38:19.358771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.874 qpair failed and we were unable to recover it. 
00:27:52.874 [2024-12-09 17:38:19.368767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.874 [2024-12-09 17:38:19.368821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.874 [2024-12-09 17:38:19.368834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.874 [2024-12-09 17:38:19.368841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.874 [2024-12-09 17:38:19.368848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.874 [2024-12-09 17:38:19.368862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.874 qpair failed and we were unable to recover it. 
00:27:52.874 [2024-12-09 17:38:19.378767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.874 [2024-12-09 17:38:19.378824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.874 [2024-12-09 17:38:19.378838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.874 [2024-12-09 17:38:19.378845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.874 [2024-12-09 17:38:19.378851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.874 [2024-12-09 17:38:19.378865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.874 qpair failed and we were unable to recover it. 
00:27:52.874 [2024-12-09 17:38:19.388765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.874 [2024-12-09 17:38:19.388831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.874 [2024-12-09 17:38:19.388844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.874 [2024-12-09 17:38:19.388851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.874 [2024-12-09 17:38:19.388857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.874 [2024-12-09 17:38:19.388871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.874 qpair failed and we were unable to recover it. 
00:27:52.874 [2024-12-09 17:38:19.398893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.874 [2024-12-09 17:38:19.398991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.874 [2024-12-09 17:38:19.399008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.874 [2024-12-09 17:38:19.399015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.874 [2024-12-09 17:38:19.399021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.874 [2024-12-09 17:38:19.399036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.874 qpair failed and we were unable to recover it. 
00:27:52.874 [2024-12-09 17:38:19.408956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.874 [2024-12-09 17:38:19.409014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.874 [2024-12-09 17:38:19.409028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.874 [2024-12-09 17:38:19.409035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.874 [2024-12-09 17:38:19.409041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:52.874 [2024-12-09 17:38:19.409056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.874 qpair failed and we were unable to recover it. 
00:27:53.133 [2024-12-09 17:38:19.419000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.133 [2024-12-09 17:38:19.419109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.133 [2024-12-09 17:38:19.419122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.133 [2024-12-09 17:38:19.419129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.133 [2024-12-09 17:38:19.419136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.133 [2024-12-09 17:38:19.419151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.133 qpair failed and we were unable to recover it. 
00:27:53.133 [2024-12-09 17:38:19.428988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.429046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.429060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.134 [2024-12-09 17:38:19.429067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.134 [2024-12-09 17:38:19.429073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.134 [2024-12-09 17:38:19.429088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.134 qpair failed and we were unable to recover it. 
00:27:53.134 [2024-12-09 17:38:19.438988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.439043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.439056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.134 [2024-12-09 17:38:19.439063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.134 [2024-12-09 17:38:19.439073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.134 [2024-12-09 17:38:19.439087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.134 qpair failed and we were unable to recover it. 
00:27:53.134 [2024-12-09 17:38:19.449026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.449076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.449089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.134 [2024-12-09 17:38:19.449096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.134 [2024-12-09 17:38:19.449102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.134 [2024-12-09 17:38:19.449117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.134 qpair failed and we were unable to recover it. 
00:27:53.134 [2024-12-09 17:38:19.459070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.459132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.459145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.134 [2024-12-09 17:38:19.459151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.134 [2024-12-09 17:38:19.459158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.134 [2024-12-09 17:38:19.459176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.134 qpair failed and we were unable to recover it. 
00:27:53.134 [2024-12-09 17:38:19.469088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.469143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.469156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.134 [2024-12-09 17:38:19.469163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.134 [2024-12-09 17:38:19.469174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.134 [2024-12-09 17:38:19.469190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.134 qpair failed and we were unable to recover it. 
00:27:53.134 [2024-12-09 17:38:19.479093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.479149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.479161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.134 [2024-12-09 17:38:19.479173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.134 [2024-12-09 17:38:19.479179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.134 [2024-12-09 17:38:19.479195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.134 qpair failed and we were unable to recover it. 
00:27:53.134 [2024-12-09 17:38:19.489125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.489196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.489209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.134 [2024-12-09 17:38:19.489215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.134 [2024-12-09 17:38:19.489222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.134 [2024-12-09 17:38:19.489237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.134 qpair failed and we were unable to recover it. 
00:27:53.134 [2024-12-09 17:38:19.499143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.499213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.499227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.134 [2024-12-09 17:38:19.499234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.134 [2024-12-09 17:38:19.499240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.134 [2024-12-09 17:38:19.499254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.134 qpair failed and we were unable to recover it. 
00:27:53.134 [2024-12-09 17:38:19.509163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.509240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.509253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.134 [2024-12-09 17:38:19.509260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.134 [2024-12-09 17:38:19.509266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.134 [2024-12-09 17:38:19.509281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.134 qpair failed and we were unable to recover it. 
00:27:53.134 [2024-12-09 17:38:19.519188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.519238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.519252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.134 [2024-12-09 17:38:19.519258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.134 [2024-12-09 17:38:19.519264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.134 [2024-12-09 17:38:19.519279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.134 qpair failed and we were unable to recover it. 
00:27:53.134 [2024-12-09 17:38:19.529218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.529271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.529290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.134 [2024-12-09 17:38:19.529297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.134 [2024-12-09 17:38:19.529304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.134 [2024-12-09 17:38:19.529320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.134 qpair failed and we were unable to recover it. 
00:27:53.134 [2024-12-09 17:38:19.539258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.539315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.539328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.134 [2024-12-09 17:38:19.539335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.134 [2024-12-09 17:38:19.539342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.134 [2024-12-09 17:38:19.539357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.134 qpair failed and we were unable to recover it. 
00:27:53.134 [2024-12-09 17:38:19.549279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.549333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.549345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.134 [2024-12-09 17:38:19.549352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.134 [2024-12-09 17:38:19.549359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.134 [2024-12-09 17:38:19.549374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.134 qpair failed and we were unable to recover it. 
00:27:53.134 [2024-12-09 17:38:19.559304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.134 [2024-12-09 17:38:19.559357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.134 [2024-12-09 17:38:19.559369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.135 [2024-12-09 17:38:19.559377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.135 [2024-12-09 17:38:19.559383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.135 [2024-12-09 17:38:19.559399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 17:38:19.569417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.135 [2024-12-09 17:38:19.569471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.135 [2024-12-09 17:38:19.569485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.135 [2024-12-09 17:38:19.569492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.135 [2024-12-09 17:38:19.569502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.135 [2024-12-09 17:38:19.569518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 17:38:19.579375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.135 [2024-12-09 17:38:19.579430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.135 [2024-12-09 17:38:19.579442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.135 [2024-12-09 17:38:19.579449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.135 [2024-12-09 17:38:19.579456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.135 [2024-12-09 17:38:19.579471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 17:38:19.589395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.135 [2024-12-09 17:38:19.589448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.135 [2024-12-09 17:38:19.589461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.135 [2024-12-09 17:38:19.589468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.135 [2024-12-09 17:38:19.589474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.135 [2024-12-09 17:38:19.589490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 17:38:19.599430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.135 [2024-12-09 17:38:19.599517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.135 [2024-12-09 17:38:19.599530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.135 [2024-12-09 17:38:19.599537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.135 [2024-12-09 17:38:19.599543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.135 [2024-12-09 17:38:19.599558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 17:38:19.609448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.135 [2024-12-09 17:38:19.609505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.135 [2024-12-09 17:38:19.609518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.135 [2024-12-09 17:38:19.609525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.135 [2024-12-09 17:38:19.609532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.135 [2024-12-09 17:38:19.609547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 17:38:19.619480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.135 [2024-12-09 17:38:19.619535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.135 [2024-12-09 17:38:19.619549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.135 [2024-12-09 17:38:19.619555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.135 [2024-12-09 17:38:19.619561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.135 [2024-12-09 17:38:19.619576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 17:38:19.629518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.135 [2024-12-09 17:38:19.629605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.135 [2024-12-09 17:38:19.629619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.135 [2024-12-09 17:38:19.629626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.135 [2024-12-09 17:38:19.629632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:27:53.135 [2024-12-09 17:38:19.629647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 17:38:19.639559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.135 [2024-12-09 17:38:19.639665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.135 [2024-12-09 17:38:19.639719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.135 [2024-12-09 17:38:19.639745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.135 [2024-12-09 17:38:19.639765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f0000b90 00:27:53.135 [2024-12-09 17:38:19.639816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 17:38:19.649524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.135 [2024-12-09 17:38:19.649609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.135 [2024-12-09 17:38:19.649636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.135 [2024-12-09 17:38:19.649651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.135 [2024-12-09 17:38:19.649664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f0000b90 00:27:53.135 [2024-12-09 17:38:19.649695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 17:38:19.659612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.135 [2024-12-09 17:38:19.659712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.135 [2024-12-09 17:38:19.659778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.135 [2024-12-09 17:38:19.659803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.135 [2024-12-09 17:38:19.659824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30ec000b90 00:27:53.135 [2024-12-09 17:38:19.659875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 17:38:19.669674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.135 [2024-12-09 17:38:19.669793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.135 [2024-12-09 17:38:19.669852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.135 [2024-12-09 17:38:19.669879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.135 [2024-12-09 17:38:19.669901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f261a0 00:27:53.135 [2024-12-09 17:38:19.669952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.393 [2024-12-09 17:38:19.679689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.393 [2024-12-09 17:38:19.679780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.393 [2024-12-09 17:38:19.679808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.393 [2024-12-09 17:38:19.679824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.393 [2024-12-09 17:38:19.679837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f261a0 00:27:53.393 [2024-12-09 17:38:19.679868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:53.393 qpair failed and we were unable to recover it. 00:27:53.394 [2024-12-09 17:38:19.680035] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:53.394 A controller has encountered a failure and is being reset. 
00:27:53.394 [2024-12-09 17:38:19.689707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.394 [2024-12-09 17:38:19.689799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.394 [2024-12-09 17:38:19.689840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.394 [2024-12-09 17:38:19.689863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.394 [2024-12-09 17:38:19.689882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30ec000b90 00:27:53.394 [2024-12-09 17:38:19.689929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.394 qpair failed and we were unable to recover it. 00:27:53.394 Controller properly reset. 00:27:53.394 Initializing NVMe Controllers 00:27:53.394 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:53.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:53.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:53.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:53.394 Initialization complete. Launching workers. 
00:27:53.394 Starting thread on core 1 00:27:53.394 Starting thread on core 2 00:27:53.394 Starting thread on core 3 00:27:53.394 Starting thread on core 0 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:53.394 00:27:53.394 real 0m10.838s 00:27:53.394 user 0m19.186s 00:27:53.394 sys 0m4.927s 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.394 ************************************ 00:27:53.394 END TEST nvmf_target_disconnect_tc2 00:27:53.394 ************************************ 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:53.394 rmmod nvme_tcp 00:27:53.394 rmmod nvme_fabrics 00:27:53.394 rmmod nvme_keyring 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:53.394 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2061801 ']' 00:27:53.653 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2061801 00:27:53.653 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2061801 ']' 00:27:53.653 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2061801 00:27:53.653 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:53.653 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:53.653 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2061801 00:27:53.653 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:53.653 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:53.653 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2061801' 00:27:53.653 killing process with pid 2061801 00:27:53.653 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2061801 00:27:53.653 17:38:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2061801 00:27:53.653 17:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:53.653 17:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:53.653 17:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:53.653 17:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:53.653 17:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:53.653 17:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:53.653 17:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:53.653 17:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:53.653 17:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:53.653 17:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.653 17:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.653 17:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.188 17:38:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:56.188 00:27:56.188 real 0m19.616s 00:27:56.188 user 0m47.077s 00:27:56.188 sys 0m9.894s 00:27:56.188 17:38:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:56.188 17:38:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:56.188 ************************************ 00:27:56.188 END TEST nvmf_target_disconnect 00:27:56.188 ************************************ 00:27:56.188 17:38:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:56.188 00:27:56.188 real 5m49.882s 00:27:56.188 user 10m29.785s 00:27:56.188 sys 1m58.131s 00:27:56.188 17:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:56.188 17:38:22 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.188 ************************************ 00:27:56.188 END TEST nvmf_host 00:27:56.188 ************************************ 00:27:56.188 17:38:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:56.188 17:38:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:56.188 17:38:22 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:56.188 17:38:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:56.188 17:38:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:56.188 17:38:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:56.188 ************************************ 00:27:56.188 START TEST nvmf_target_core_interrupt_mode 00:27:56.188 ************************************ 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:56.188 * Looking for test storage... 
00:27:56.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:56.188 17:38:22 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:56.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.188 --rc 
genhtml_branch_coverage=1 00:27:56.188 --rc genhtml_function_coverage=1 00:27:56.188 --rc genhtml_legend=1 00:27:56.188 --rc geninfo_all_blocks=1 00:27:56.188 --rc geninfo_unexecuted_blocks=1 00:27:56.188 00:27:56.188 ' 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:56.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.188 --rc genhtml_branch_coverage=1 00:27:56.188 --rc genhtml_function_coverage=1 00:27:56.188 --rc genhtml_legend=1 00:27:56.188 --rc geninfo_all_blocks=1 00:27:56.188 --rc geninfo_unexecuted_blocks=1 00:27:56.188 00:27:56.188 ' 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:56.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.188 --rc genhtml_branch_coverage=1 00:27:56.188 --rc genhtml_function_coverage=1 00:27:56.188 --rc genhtml_legend=1 00:27:56.188 --rc geninfo_all_blocks=1 00:27:56.188 --rc geninfo_unexecuted_blocks=1 00:27:56.188 00:27:56.188 ' 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:56.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.188 --rc genhtml_branch_coverage=1 00:27:56.188 --rc genhtml_function_coverage=1 00:27:56.188 --rc genhtml_legend=1 00:27:56.188 --rc geninfo_all_blocks=1 00:27:56.188 --rc geninfo_unexecuted_blocks=1 00:27:56.188 00:27:56.188 ' 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.188 
17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.188 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.189 17:38:22 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:56.189 
17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:56.189 ************************************ 00:27:56.189 START TEST nvmf_abort 00:27:56.189 ************************************ 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:56.189 * Looking for test storage... 
00:27:56.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:27:56.189 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:56.448 17:38:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.448 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:56.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.448 --rc genhtml_branch_coverage=1 00:27:56.449 --rc genhtml_function_coverage=1 00:27:56.449 --rc genhtml_legend=1 00:27:56.449 --rc geninfo_all_blocks=1 00:27:56.449 --rc geninfo_unexecuted_blocks=1 00:27:56.449 00:27:56.449 ' 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:56.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.449 --rc genhtml_branch_coverage=1 00:27:56.449 --rc genhtml_function_coverage=1 00:27:56.449 --rc genhtml_legend=1 00:27:56.449 --rc geninfo_all_blocks=1 00:27:56.449 --rc geninfo_unexecuted_blocks=1 00:27:56.449 00:27:56.449 ' 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:56.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.449 --rc genhtml_branch_coverage=1 00:27:56.449 --rc genhtml_function_coverage=1 00:27:56.449 --rc genhtml_legend=1 00:27:56.449 --rc geninfo_all_blocks=1 00:27:56.449 --rc geninfo_unexecuted_blocks=1 00:27:56.449 00:27:56.449 ' 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:56.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.449 --rc genhtml_branch_coverage=1 00:27:56.449 --rc genhtml_function_coverage=1 00:27:56.449 --rc genhtml_legend=1 00:27:56.449 --rc geninfo_all_blocks=1 00:27:56.449 --rc geninfo_unexecuted_blocks=1 00:27:56.449 00:27:56.449 ' 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.449 17:38:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:56.449 17:38:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:56.449 17:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:03.019 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:03.020 17:38:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:03.020 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:03.020 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.020 
17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:03.020 Found net devices under 0000:af:00.0: cvl_0_0 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:03.020 Found net devices under 0000:af:00.1: cvl_0_1 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.020 17:38:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:03.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:28:03.020 00:28:03.020 --- 10.0.0.2 ping statistics --- 00:28:03.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.020 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:03.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:28:03.020 00:28:03.020 --- 10.0.0.1 ping statistics --- 00:28:03.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.020 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2066461 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2066461 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2066461 ']' 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.020 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:03.020 [2024-12-09 17:38:28.754256] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:03.020 [2024-12-09 17:38:28.755152] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:28:03.020 [2024-12-09 17:38:28.755190] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.020 [2024-12-09 17:38:28.830651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:03.020 [2024-12-09 17:38:28.870247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.021 [2024-12-09 17:38:28.870282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.021 [2024-12-09 17:38:28.870289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.021 [2024-12-09 17:38:28.870295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.021 [2024-12-09 17:38:28.870300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:03.021 [2024-12-09 17:38:28.871507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.021 [2024-12-09 17:38:28.871616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.021 [2024-12-09 17:38:28.871617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.021 [2024-12-09 17:38:28.937760] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:03.021 [2024-12-09 17:38:28.938478] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:03.021 [2024-12-09 17:38:28.938601] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:03.021 [2024-12-09 17:38:28.938762] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:03.021 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:03.021 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:03.021 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:03.021 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:03.021 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.021 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:03.021 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.021 17:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 [2024-12-09 17:38:29.004477] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:03.021 Malloc0 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 Delay0 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 [2024-12-09 17:38:29.088384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.021 17:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:03.021 [2024-12-09 17:38:29.260248] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:04.923 Initializing NVMe Controllers 00:28:04.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:04.923 controller IO queue size 128 less than required 00:28:04.923 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:04.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:04.923 Initialization complete. Launching workers. 
00:28:04.923 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37319 00:28:04.923 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37376, failed to submit 66 00:28:04.923 success 37319, unsuccessful 57, failed 0 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:04.923 rmmod nvme_tcp 00:28:04.923 rmmod nvme_fabrics 00:28:04.923 rmmod nvme_keyring 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:04.923 17:38:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2066461 ']' 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2066461 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2066461 ']' 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2066461 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:04.923 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2066461 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2066461' 00:28:05.182 killing process with pid 2066461 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2066461 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2066461 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:05.182 17:38:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.182 17:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.717 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:07.718 00:28:07.718 real 0m11.134s 00:28:07.718 user 0m10.527s 00:28:07.718 sys 0m5.755s 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:07.718 ************************************ 00:28:07.718 END TEST nvmf_abort 00:28:07.718 ************************************ 00:28:07.718 17:38:33 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:07.718 ************************************ 00:28:07.718 START TEST nvmf_ns_hotplug_stress 00:28:07.718 ************************************ 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:07.718 * Looking for test storage... 
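The stretch of trace that follows exercises `cmp_versions` from `scripts/common.sh` to decide whether the installed lcov (1.15 here) is older than 2, which gates the `--rc lcov_*` options. Its component-wise comparison amounts to roughly this simplified sketch (the real helper also normalizes non-decimal components, which is omitted here):

```shell
#!/usr/bin/env bash
# Simplified version-compare in the spirit of cmp_versions: split both
# versions on '.' and '-', then compare numerically field by field.
lt() {  # returns 0 (true) when $1 is strictly less than $2
  local IFS=.-
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v x y
  for (( v = 0; v < len; v++ )); do
    x=${a[v]:-0} y=${b[v]:-0}   # missing components count as 0
    (( x > y )) && return 1
    (( x < y )) && return 0
  done
  return 1  # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"   # the case traced below: lcov 1.15 vs 2
```

In the trace, `ver1_l=2 ver2_l=1` correspond to the split lengths of `1.15` and `2`, and the loop returns at the first component since 1 < 2.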
00:28:07.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:07.718 17:38:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:07.718 17:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:07.718 17:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:07.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.718 --rc genhtml_branch_coverage=1 00:28:07.718 --rc genhtml_function_coverage=1 00:28:07.718 --rc genhtml_legend=1 00:28:07.718 --rc geninfo_all_blocks=1 00:28:07.718 --rc geninfo_unexecuted_blocks=1 00:28:07.718 00:28:07.718 ' 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:07.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.718 --rc genhtml_branch_coverage=1 00:28:07.718 --rc genhtml_function_coverage=1 00:28:07.718 --rc genhtml_legend=1 00:28:07.718 --rc geninfo_all_blocks=1 00:28:07.718 --rc geninfo_unexecuted_blocks=1 00:28:07.718 00:28:07.718 ' 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:07.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.718 --rc genhtml_branch_coverage=1 00:28:07.718 --rc genhtml_function_coverage=1 00:28:07.718 --rc genhtml_legend=1 00:28:07.718 --rc geninfo_all_blocks=1 00:28:07.718 --rc geninfo_unexecuted_blocks=1 00:28:07.718 00:28:07.718 ' 00:28:07.718 17:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:07.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.718 --rc genhtml_branch_coverage=1 00:28:07.718 --rc genhtml_function_coverage=1 00:28:07.718 --rc genhtml_legend=1 00:28:07.718 --rc geninfo_all_blocks=1 00:28:07.718 --rc geninfo_unexecuted_blocks=1 00:28:07.718 00:28:07.718 ' 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.718 17:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.718 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.719 
17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:07.719 17:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:14.287 
17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.287 17:38:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.287 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:14.288 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.288 17:38:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:14.288 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.288 
17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:14.288 Found net devices under 0000:af:00.0: cvl_0_0 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:14.288 Found net devices under 0000:af:00.1: cvl_0_1 00:28:14.288 
17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:14.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:28:14.288 00:28:14.288 --- 10.0.0.2 ping statistics --- 00:28:14.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.288 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:14.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:28:14.288 00:28:14.288 --- 10.0.0.1 ping statistics --- 00:28:14.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.288 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:14.288 17:38:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2070379 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2070379 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2070379 ']' 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.288 17:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:14.288 [2024-12-09 17:38:39.933335] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:14.288 [2024-12-09 17:38:39.934164] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:28:14.288 [2024-12-09 17:38:39.934218] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.289 [2024-12-09 17:38:39.997016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:14.289 [2024-12-09 17:38:40.042761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.289 [2024-12-09 17:38:40.042797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.289 [2024-12-09 17:38:40.042805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.289 [2024-12-09 17:38:40.042811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.289 [2024-12-09 17:38:40.042816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:14.289 [2024-12-09 17:38:40.043975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.289 [2024-12-09 17:38:40.044080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.289 [2024-12-09 17:38:40.044082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:14.289 [2024-12-09 17:38:40.113096] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:14.289 [2024-12-09 17:38:40.113947] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:14.289 [2024-12-09 17:38:40.114157] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:14.289 [2024-12-09 17:38:40.114268] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:14.289 17:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.289 17:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:14.289 17:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:14.289 17:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:14.289 17:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:14.289 17:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.289 17:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:28:14.289 17:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:14.289 [2024-12-09 17:38:40.360883] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.289 17:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:14.289 17:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:14.289 [2024-12-09 17:38:40.749247] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.289 17:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:14.561 17:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:14.866 Malloc0 00:28:14.866 17:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:14.866 Delay0 00:28:14.866 17:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.124 17:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:15.383 NULL1 00:28:15.383 17:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:15.383 17:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:15.383 17:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2070691 00:28:15.383 17:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:15.642 17:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.642 17:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.900 17:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:15.900 17:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:16.159 true 00:28:16.159 17:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:16.159 17:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.417 17:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.676 17:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:16.676 17:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:16.676 true 00:28:16.676 17:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:16.676 17:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.934 17:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.193 17:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:17.193 17:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:17.451 true 00:28:17.451 17:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:17.451 17:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.709 17:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.967 17:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:17.967 17:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:17.967 true 00:28:17.968 17:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:17.968 17:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.226 17:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.484 17:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:18.484 17:38:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:18.743 true 00:28:18.743 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:18.743 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.001 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:19.259 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:19.259 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:19.259 true 00:28:19.259 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:19.259 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.518 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:19.776 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:28:19.776 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:20.035 true 00:28:20.035 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:20.035 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.293 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.552 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:20.552 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:20.552 true 00:28:20.810 17:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:20.810 17:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.811 17:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:21.069 17:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:28:21.069 17:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:21.327 true 00:28:21.327 17:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:21.327 17:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.586 17:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:21.845 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:21.845 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:21.845 true 00:28:22.103 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:22.103 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.103 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.362 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:22.362 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:22.621 true 00:28:22.621 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:22.621 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.880 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:23.138 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:23.138 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:23.138 true 00:28:23.397 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:23.397 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.397 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:23.656 17:38:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:23.656 17:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:23.914 true 00:28:23.914 17:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:23.914 17:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.173 17:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.431 17:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:24.431 17:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:24.690 true 00:28:24.690 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691 00:28:24.690 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.690 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:28:24.948 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:28:24.948 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:28:25.207 true
00:28:25.207 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:25.207 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:25.466 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:25.724 17:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:28:25.724 17:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:28:25.983 true
00:28:25.983 17:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:25.983 17:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:25.983 17:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:26.241 17:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:28:26.241 17:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:28:26.503 true
00:28:26.503 17:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:26.503 17:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:26.764 17:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:27.023 17:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:28:27.023 17:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:28:27.281 true
00:28:27.281 17:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:27.281 17:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:27.281 17:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:27.539 17:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:28:27.540 17:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:28:27.798 true
00:28:27.798 17:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:27.798 17:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:28.057 17:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:28.315 17:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:28:28.315 17:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:28:28.574 true
00:28:28.574 17:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:28.574 17:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:28.574 17:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:28.832 17:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:28:28.832 17:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:28:29.090 true
00:28:29.090 17:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:29.090 17:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:29.348 17:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:29.607 17:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:28:29.607 17:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:28:29.607 true
00:28:29.865 17:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:29.865 17:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:29.865 17:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:30.124 17:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:28:30.124 17:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:28:30.383 true
00:28:30.383 17:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:30.383 17:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:30.641 17:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:30.900 17:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:28:30.900 17:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:28:31.159 true
00:28:31.159 17:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:31.159 17:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:31.159 17:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:31.417 17:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:28:31.417 17:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:28:31.676 true
00:28:31.676 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:31.676 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:31.935 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:32.193 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:28:32.193 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:28:32.193 true
00:28:32.451 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:32.451 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:32.451 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:32.709 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:28:32.709 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:28:32.967 true
00:28:32.967 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:32.967 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:33.225 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:33.484 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:28:33.484 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:28:33.484 true
00:28:33.742 17:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:33.742 17:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:33.742 17:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:34.000 17:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:28:34.000 17:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:28:34.259 true
00:28:34.259 17:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:34.259 17:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:34.518 17:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:34.776 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:28:34.776 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:28:35.034 true
00:28:35.034 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:35.034 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:35.034 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:35.292 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:28:35.292 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:28:35.551 true
00:28:35.551 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:35.551 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:35.809 17:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:36.067 17:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:28:36.067 17:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:28:36.067 true
00:28:36.067 17:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:36.067 17:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:36.325 17:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:36.583 17:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:28:36.584 17:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:28:36.842 true
00:28:36.842 17:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:36.842 17:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:37.100 17:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:37.359 17:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:28:37.359 17:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:28:37.359 true
00:28:37.359 17:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:37.359 17:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:37.617 17:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:37.875 17:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:28:37.875 17:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:28:38.151 true
00:28:38.151 17:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:38.151 17:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:38.416 17:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:38.674 17:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:28:38.674 17:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:28:38.674 true
00:28:38.674 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:38.674 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:38.932 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:39.190 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:28:39.190 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:28:39.449 true
00:28:39.449 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:39.449 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:39.708 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:39.708 17:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:28:39.708 17:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:28:39.967 true
00:28:39.967 17:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:39.967 17:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:40.225 17:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:40.484 17:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:28:40.484 17:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:28:40.743 true
00:28:40.743 17:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:40.743 17:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:40.743 17:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:41.002 17:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:28:41.002 17:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:28:41.261 true
00:28:41.261 17:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:41.261 17:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:41.520 17:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:41.778 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:28:41.778 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:28:41.778 true
00:28:41.778 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:41.778 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:42.037 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:42.295 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042
00:28:42.295 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042
00:28:42.554 true
00:28:42.554 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:42.554 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:42.812 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:43.071 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043
00:28:43.071 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043
00:28:43.071 true
00:28:43.071 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:43.071 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:43.330 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:43.588 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044
00:28:43.588 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044
00:28:43.847 true
00:28:43.847 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:43.847 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:44.106 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:44.364 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045
00:28:44.364 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045
00:28:44.364 true
00:28:44.623 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:44.623 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:44.623 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:44.882 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:28:44.882 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:28:45.140 true
00:28:45.140 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:45.140 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:45.399 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:45.658 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:28:45.658 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:28:45.658 true
00:28:45.658 Initializing NVMe Controllers
00:28:45.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:45.658 Controller IO queue size 128, less than required.
00:28:45.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:45.658 Initialization complete. Launching workers.
00:28:45.658 ========================================================
00:28:45.658 Latency(us)
00:28:45.658 Device Information : IOPS MiB/s Average min max
00:28:45.658 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 28231.77 13.79 4533.65 1578.10 8276.22
00:28:45.658 ========================================================
00:28:45.658 Total : 28231.77 13.79 4533.65 1578.10 8276.22
00:28:45.916 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2070691
00:28:45.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2070691) - No such process
00:28:45.916 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2070691
00:28:45.916 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:45.916 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:46.175 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:28:46.175 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:28:46.175 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:28:46.175 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:46.175 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:28:46.434 null0
00:28:46.434 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:46.434 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:46.434 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:28:46.434 null1
00:28:46.694 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:46.694 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:46.694 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:28:46.694 null2
00:28:46.694 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:46.694 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:46.694 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:28:46.952 null3
00:28:46.952 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:46.952 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:46.952 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:28:47.211 null4
00:28:47.211 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:47.211 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:47.211 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:28:47.211 null5
00:28:47.469 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:47.470 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:47.470 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:28:47.470 null6
00:28:47.470 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:47.470 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:47.470 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:28:47.729 null7
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:47.729 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2076477 2076480 2076482 2076485 2076488 2076491 2076493 2076496
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.730 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:47.989 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:47.989 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:47.989 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:47.989 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:47.989 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:47.989 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:47.989 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:47.989 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:48.248 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.248 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:48.249 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.508 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:48.508 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.508 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.508 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:48.766 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:48.766 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:48.766 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:48.766 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:48.766 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:48.766 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:48.766 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:48.766 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:49.023 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.023 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.023 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:49.023 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:49.024 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.282 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.541 17:39:15
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.541 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:49.542 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:49.542 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:49.542 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:49.542 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.542 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:49.542 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:49.542 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:49.542 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.801 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:50.060 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:50.060 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:50.060 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:50.060 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:28:50.060 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:50.060 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:50.060 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:50.060 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.060 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.060 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.060 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:50.319 17:39:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:50.319 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:50.578 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.578 17:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.578 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:50.838 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:50.838 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:50.838 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:50.838 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:50.838 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:50.838 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.838 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:50.838 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:51.096 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.096 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.096 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:51.096 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.096 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.096 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:51.097 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:51.355 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:51.355 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:51.355 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:51.355 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.356 17:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.356 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:51.615 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:51.615 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:51.615 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:51.615 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:51.615 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:51.615 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:51.615 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.615 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:51.874 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.874 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.874 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.874 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.874 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.875 17:39:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.875 rmmod nvme_tcp 00:28:51.875 rmmod nvme_fabrics 00:28:51.875 rmmod nvme_keyring 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2070379 ']' 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2070379 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2070379 ']' 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@958 -- # kill -0 2070379 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2070379 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2070379' 00:28:51.875 killing process with pid 2070379 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2070379 00:28:51.875 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2070379 00:28:52.134 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:52.134 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:52.134 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:52.134 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:52.134 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:52.134 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:52.134 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:52.134 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:52.134 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:52.134 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.134 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.134 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.670 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:54.670 00:28:54.670 real 0m46.829s 00:28:54.670 user 3m1.492s 00:28:54.670 sys 0m21.104s 00:28:54.670 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:54.670 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:54.670 ************************************ 00:28:54.670 END TEST nvmf_ns_hotplug_stress 00:28:54.671 ************************************ 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:54.671 17:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:54.671 ************************************ 00:28:54.671 START TEST nvmf_delete_subsystem 00:28:54.671 ************************************ 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:54.671 * Looking for test storage... 00:28:54.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:54.671 17:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:54.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.671 --rc genhtml_branch_coverage=1 00:28:54.671 --rc genhtml_function_coverage=1 00:28:54.671 --rc genhtml_legend=1 00:28:54.671 --rc geninfo_all_blocks=1 00:28:54.671 --rc geninfo_unexecuted_blocks=1 00:28:54.671 00:28:54.671 ' 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:54.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.671 --rc genhtml_branch_coverage=1 00:28:54.671 --rc genhtml_function_coverage=1 00:28:54.671 --rc genhtml_legend=1 00:28:54.671 --rc geninfo_all_blocks=1 00:28:54.671 --rc geninfo_unexecuted_blocks=1 00:28:54.671 00:28:54.671 ' 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:54.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.671 --rc genhtml_branch_coverage=1 00:28:54.671 --rc genhtml_function_coverage=1 00:28:54.671 --rc genhtml_legend=1 00:28:54.671 --rc geninfo_all_blocks=1 00:28:54.671 --rc geninfo_unexecuted_blocks=1 00:28:54.671 00:28:54.671 ' 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:54.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.671 --rc genhtml_branch_coverage=1 00:28:54.671 --rc genhtml_function_coverage=1 00:28:54.671 --rc genhtml_legend=1 00:28:54.671 --rc geninfo_all_blocks=1 00:28:54.671 --rc geninfo_unexecuted_blocks=1 00:28:54.671 00:28:54.671 ' 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.671 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:54.672 17:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:54.672 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.244 17:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.244 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:01.245 17:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:01.245 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:01.245 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.245 17:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:01.245 Found net devices under 0000:af:00.0: cvl_0_0 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:01.245 Found net devices under 0000:af:00.1: cvl_0_1 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:01.245 17:39:26 
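The device-discovery trace above (nvmf/common.sh lines 315 through 442) builds per-family pools of supported NICs and then walks them looking for bound net devices. A condensed, hypothetical sketch of that setup follows — the vendor and device IDs are taken verbatim from the trace, but in the real script the arrays hold PCI addresses resolved through `pci_bus_cache`, not the raw IDs shown here:

```shell
#!/usr/bin/env bash
# Hypothetical condensation of the nvmf/common.sh device-pool setup traced
# above. IDs are copied from the log; the real arrays hold PCI addresses.
intel=0x8086 mellanox=0x15b3

# Supported NIC families, keyed "vendor:device" as in pci_bus_cache.
e810=("$intel:0x1592" "$intel:0x159b")
x722=("$intel:0x37d2")
mlx=("$mellanox:0xa2dc" "$mellanox:0x1021" "$mellanox:0xa2d6"
     "$mellanox:0x101d" "$mellanox:0x101b" "$mellanox:0x1017"
     "$mellanox:0x1019" "$mellanox:0x1015" "$mellanox:0x1013")

# The trace selected the e810 pool (TEST_TRANSPORT=tcp, driver=e810),
# which is why both 0x159b ports (0000:af:00.0/1) were found.
pci_devs=("${e810[@]}")
printf 'e810 IDs: %s\n' "${#e810[@]}"
printf 'mlx IDs: %s\n'  "${#mlx[@]}"
```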
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.245 17:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:01.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
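The `nvmf_tcp_init` steps above move one physical port into a network namespace so target and initiator can talk over real wire on a single host. The dry-run sketch below replays that plumbing; interface names (cvl_0_0/cvl_0_1), addresses, and the namespace name come from the trace, and `run` only echoes because the real commands need root and the physical e810 ports:

```shell
#!/usr/bin/env bash
# Dry-run replay of the namespace plumbing traced above.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"       # target port moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1   # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port toward the initiator-side interface.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The two `ping -c 1` probes that follow in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) verify this plumbing before the target starts.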
00:29:01.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:29:01.245 00:29:01.245 --- 10.0.0.2 ping statistics --- 00:29:01.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.245 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:01.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:29:01.245 00:29:01.245 --- 10.0.0.1 ping statistics --- 00:29:01.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.245 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:01.245 
17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2080712 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2080712 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2080712 ']' 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.245 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.246 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:01.246 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.246 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.246 [2024-12-09 17:39:26.861208] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:01.246 [2024-12-09 17:39:26.862094] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:29:01.246 [2024-12-09 17:39:26.862127] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.246 [2024-12-09 17:39:26.940055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:01.246 [2024-12-09 17:39:26.979306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.246 [2024-12-09 17:39:26.979345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.246 [2024-12-09 17:39:26.979352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.246 [2024-12-09 17:39:26.979358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.246 [2024-12-09 17:39:26.979363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:01.246 [2024-12-09 17:39:26.980419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.246 [2024-12-09 17:39:26.980421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.246 [2024-12-09 17:39:27.048172] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:01.246 [2024-12-09 17:39:27.048704] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:01.246 [2024-12-09 17:39:27.048868] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.246 [2024-12-09 17:39:27.117217] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.246 [2024-12-09 17:39:27.149532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.246 NULL1 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.246 Delay0 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2080928 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:01.246 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:01.246 [2024-12-09 17:39:27.260336] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
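The RPC sequence traced above sets up the fixture for the delete-subsystem test: a null bdev wrapped in a delay bdev, so in-flight I/O is guaranteed to be pending when the subsystem is torn down. A dry-run sketch of that sequence, with NQN, bdev names, and delay parameters taken from the log (`rpc` only echoes, since a live nvmf_tgt listening on /var/tmp/spdk.sock would be required):

```shell
#!/usr/bin/env bash
# Dry-run replay of the delete_subsystem.sh setup traced above.
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512   # size/block-size as in the trace
# 1,000,000 us latency on every op class: I/O stays queued long enough
# for the deletion to race against it.
rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0
# spdk_nvme_perf then runs against the listener while this fires:
rpc nvmf_delete_subsystem "$NQN"
```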
00:29:02.727 17:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:02.727 17:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.727 17:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 starting I/O failed: -6 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 starting I/O failed: -6 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 starting I/O failed: -6 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 starting I/O failed: -6 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 starting I/O failed: -6 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 starting I/O failed: -6 00:29:02.986 Read completed with error (sct=0, 
sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 starting I/O failed: -6 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 starting I/O failed: -6 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 starting I/O failed: -6 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 starting I/O failed: -6 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 starting I/O failed: -6 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 [2024-12-09 17:39:29.434647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092780 is same with the state(6) to be set 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error 
(sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 
00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 starting I/O failed: -6 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Read completed with error (sct=0, sc=8) 00:29:02.986 starting I/O failed: -6 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.986 Write completed with error (sct=0, sc=8) 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 starting I/O failed: -6 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 starting I/O failed: -6 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 Write completed with error (sct=0, sc=8) 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 starting I/O failed: -6 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 Write completed with error (sct=0, sc=8) 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 starting I/O failed: -6 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 Read 
completed with error (sct=0, sc=8) 00:29:02.987 starting I/O failed: -6 00:29:02.987 Read completed with error (sct=0, sc=8) 00:29:02.987 Write completed with error (sct=0, sc=8) 00:29:02.987 Read completed with error (sct=0, sc=8)
[... many further identical "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided ...]
00:29:02.987 [2024-12-09 17:39:29.435727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc554000c40 is same with the state(6) to be set
[... repeated completion-error lines elided ...]
00:29:03.924 [2024-12-09 17:39:30.398140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20939b0 is same with the state(6) to be set
[... repeated completion-error lines elided ...]
00:29:03.925 [2024-12-09 17:39:30.438359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20922c0 is same with the state(6) to be set
[... repeated completion-error lines elided ...]
00:29:03.925 [2024-12-09 17:39:30.438514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092960 is same with the state(6) to be set
[... repeated completion-error lines elided ...]
00:29:03.925 [2024-12-09 17:39:30.438669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc55400d7c0 is same with the state(6) to be set
[... repeated completion-error lines elided ...]
00:29:03.925 [2024-12-09 17:39:30.439549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc55400d020 is same with the state(6) to be set
00:29:03.925 Initializing NVMe Controllers
00:29:03.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:03.925 Controller IO queue size 128, less than required.
00:29:03.925 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:03.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:03.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:03.925 Initialization complete. Launching workers.
00:29:03.925 ========================================================
00:29:03.925 Latency(us)
00:29:03.925 Device Information : IOPS MiB/s Average min max
00:29:03.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.86 0.08 937199.20 325.14 2002593.75
00:29:03.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.91 0.08 992921.59 243.01 2002933.71
00:29:03.925 ========================================================
00:29:03.925 Total : 330.77 0.16 964474.72 243.01 2002933.71
00:29:03.925
00:29:03.925 [2024-12-09 17:39:30.440211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20939b0 (9): Bad file descriptor
00:29:03.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:03.925 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.925 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:29:03.925 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2080928
00:29:03.925 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2080928
00:29:04.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: 
line 35: kill: (2080928) - No such process 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2080928 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2080928 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2080928 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:04.494 17:39:30 
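[Editor's note] The xtrace above steps through delete_subsystem.sh's wait-for-exit pattern: probe the perf PID with `kill -0`, sleep, and give up after a bounded number of retries before reaping with `wait`. A minimal standalone sketch of that loop (the helper name `wait_for_exit` is hypothetical, not from the SPDK scripts):

```shell
#!/usr/bin/env bash
# Sketch only: poll a PID with kill -0 (signal 0 = existence check) until
# it exits or the retry budget runs out, then reap it with wait.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 20 )) && return 1   # give up after ~10s of 0.5s naps
        sleep 0.5
    done
    wait "$pid" 2>/dev/null              # reap; harmless if already reaped
    return 0
}

sleep 0.2 &                              # stand-in for spdk_nvme_perf
wait_for_exit $! && echo "process exited"
```

As in the traced script, `kill -0` failing with "No such process" is the success path here: it means the workload finished on its own.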
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:04.494 [2024-12-09 17:39:30.969433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2081397 00:29:04.494 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 
00:29:04.495 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:04.495 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2081397 00:29:04.495 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:04.753 [2024-12-09 17:39:31.054065] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:05.012 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:05.012 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2081397 00:29:05.012 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:05.579 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:05.579 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2081397 00:29:05.579 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:06.146 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:06.146 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2081397 
00:29:06.146 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:06.713 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:06.713 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2081397 00:29:06.713 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:06.971 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:06.971 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2081397 00:29:06.971 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:07.539 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:07.539 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2081397 00:29:07.539 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:07.798 Initializing NVMe Controllers 00:29:07.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.798 Controller IO queue size 128, less than required. 00:29:07.798 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:07.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:07.798 Initialization complete. Launching workers. 
00:29:07.798 ========================================================
00:29:07.798 Latency(us)
00:29:07.798 Device Information : IOPS MiB/s Average min max
00:29:07.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002420.25 1000128.12 1008322.22
00:29:07.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004094.56 1000150.80 1042054.43
00:29:07.798 ========================================================
00:29:07.798 Total : 256.00 0.12 1003257.40 1000128.12 1042054.43
00:29:07.798
00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2081397
00:29:08.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2081397) - No such process
00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2081397
00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:08.057 rmmod nvme_tcp 00:29:08.057 rmmod nvme_fabrics 00:29:08.057 rmmod nvme_keyring 00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2080712 ']' 00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2080712 00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2080712 ']' 00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2080712 00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:08.057 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2080712 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:08.316 17:39:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2080712' 00:29:08.316 killing process with pid 2080712 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2080712 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2080712 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.316 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.316 17:39:34 
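[Editor's note] The `killprocess` trace above checks the target's command name with `ps` before killing it (so a recycled PID or the sudo wrapper is never signalled by mistake), then waits on it. A simplified sketch of that shutdown pattern (not the real autotest_common.sh helper, which carries more special cases):

```shell
#!/usr/bin/env bash
# Simplified killprocess sketch: confirm the PID is alive, inspect its
# command name via ps, signal it, and wait so no zombie is left behind.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # already gone: nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1              # refuse to signal the sudo wrapper
    kill "$pid"
    wait "$pid" 2>/dev/null                     # reap the terminated child
    return 0
}

sleep 30 &                                      # stand-in for the nvmf target (reactor_0)
killprocess $! && echo "killed and reaped"
```

The `comm=` check is the reason the log prints `process_name=reactor_0` before the kill: the SPDK target's reactor thread name is what `ps` reports for the PID.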
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.852 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:10.852 00:29:10.852 real 0m16.135s 00:29:10.852 user 0m26.209s 00:29:10.852 sys 0m6.076s 00:29:10.852 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.852 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:10.852 ************************************ 00:29:10.852 END TEST nvmf_delete_subsystem 00:29:10.852 ************************************ 00:29:10.852 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:10.852 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:10.852 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.852 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:10.852 ************************************ 00:29:10.852 START TEST nvmf_host_management 00:29:10.852 ************************************ 00:29:10.852 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:10.852 * Looking for test storage... 
00:29:10.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:10.852 17:39:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:10.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.852 --rc genhtml_branch_coverage=1 00:29:10.852 --rc genhtml_function_coverage=1 00:29:10.852 --rc genhtml_legend=1 00:29:10.852 --rc geninfo_all_blocks=1 00:29:10.852 --rc geninfo_unexecuted_blocks=1 00:29:10.852 00:29:10.852 ' 00:29:10.852 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:10.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.852 --rc genhtml_branch_coverage=1 00:29:10.852 --rc genhtml_function_coverage=1 00:29:10.853 --rc genhtml_legend=1 00:29:10.853 --rc geninfo_all_blocks=1 00:29:10.853 --rc geninfo_unexecuted_blocks=1 00:29:10.853 00:29:10.853 ' 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:10.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.853 --rc genhtml_branch_coverage=1 00:29:10.853 --rc genhtml_function_coverage=1 00:29:10.853 --rc genhtml_legend=1 00:29:10.853 --rc geninfo_all_blocks=1 00:29:10.853 --rc geninfo_unexecuted_blocks=1 00:29:10.853 00:29:10.853 ' 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:10.853 
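[Editor's note] The `cmp_versions` trace above splits each version string on dots and compares component-wise to decide `lt 1.15 2` for the lcov check. A condensed sketch of that comparison (the function name `lt` matches the trace; the body is simplified from what scripts/common.sh actually does, e.g. it drops the `:`-separator handling):

```shell
#!/usr/bin/env bash
# Sketch of a dotted-version less-than test: split both versions on . and -,
# pad the shorter one with zeros, and compare numerically left to right.
lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

Numeric (not lexicographic) comparison per component is the point: string comparison would wrongly order `1.15` after `1.9`.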
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.853 --rc genhtml_branch_coverage=1 00:29:10.853 --rc genhtml_function_coverage=1 00:29:10.853 --rc genhtml_legend=1 00:29:10.853 --rc geninfo_all_blocks=1 00:29:10.853 --rc geninfo_unexecuted_blocks=1 00:29:10.853 00:29:10.853 ' 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.853 17:39:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.853 
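The growing `PATH=` lines above show paths/export.sh prepending the same toolchain directories (`/opt/go`, `/opt/golangci`, `/opt/protoc`) each time it is sourced, so duplicate entries accumulate. A minimal sketch of a duplicate-safe prepend that would avoid this; the helper name `path_prepend` is my own, not from the SPDK scripts:

```shell
#!/usr/bin/env bash
# Prepend a directory to PATH only if it is not already present,
# avoiding the duplicated entries visible in the trace above.
path_prepend() {
    local dir=$1
    case ":$PATH:" in
        *":$dir:"*) ;;          # already in PATH: do nothing
        *) PATH="$dir:$PATH" ;; # otherwise prepend
    esac
}

PATH=/usr/bin:/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
echo "$PATH"
```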
17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:10.853 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:17.423 
17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.423 17:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:17.423 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.423 17:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:17.423 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.423 17:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:17.423 Found net devices under 0000:af:00.0: cvl_0_0 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.423 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:17.424 Found net devices under 0000:af:00.1: cvl_0_1 00:29:17.424 17:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
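The `nvmf/common.sh@410-429` steps above resolve each detected PCI address to its kernel network interface by globbing sysfs and stripping the directory prefix. A self-contained sketch of that lookup against a temporary stand-in for the sysfs tree (the real script globs `/sys/bus/pci/devices` directly):

```shell
#!/usr/bin/env bash
# Sketch of the PCI -> net-interface lookup from nvmf/common.sh:
# glob <sysfs>/bus/pci/devices/<pci>/net/* and keep only the basename,
# which is the interface name (e.g. cvl_0_0 in the log above).
set -e
sysfs=$(mktemp -d)
pci="0000:af:00.0"
mkdir -p "$sysfs/bus/pci/devices/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/bus/pci/devices/$pci/net/"*)  # full sysfs paths
pci_net_devs=("${pci_net_devs[@]##*/}")             # strip to interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```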
00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:17.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:29:17.424 00:29:17.424 --- 10.0.0.2 ping statistics --- 00:29:17.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.424 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:17.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:29:17.424 00:29:17.424 --- 10.0.0.1 ping statistics --- 00:29:17.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.424 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
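The `ipts` call at `nvmf/common.sh@287` expands (at `@790`) into an iptables invocation tagged with an `SPDK_NVMF:` comment, so teardown can later find and delete exactly the rules the test added. A sketch of that wrapper pattern; the real helper calls iptables directly, here it is replaced by an echo stub so the sketch runs without root:

```shell
#!/usr/bin/env bash
# Sketch of the ipts helper seen in the trace: forward all arguments to
# iptables, appending a comment that records the original rule text.
iptables() { echo "iptables $*"; }  # stub standing in for the real binary

ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Tagging rules this way means cleanup only needs to scan for the `SPDK_NVMF:` marker rather than remembering every rule it inserted.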
00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:17.424 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2085524 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2085524 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2085524 ']' 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.424 [2024-12-09 17:39:43.074717] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:17.424 [2024-12-09 17:39:43.075675] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:29:17.424 [2024-12-09 17:39:43.075713] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.424 [2024-12-09 17:39:43.155434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.424 [2024-12-09 17:39:43.195266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.424 [2024-12-09 17:39:43.195301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.424 [2024-12-09 17:39:43.195307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.424 [2024-12-09 17:39:43.195313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.424 [2024-12-09 17:39:43.195317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:17.424 [2024-12-09 17:39:43.196717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.424 [2024-12-09 17:39:43.196825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.424 [2024-12-09 17:39:43.196935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.424 [2024-12-09 17:39:43.196936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:17.424 [2024-12-09 17:39:43.264764] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:17.424 [2024-12-09 17:39:43.265685] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:17.424 [2024-12-09 17:39:43.265780] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:17.424 [2024-12-09 17:39:43.265971] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:17.424 [2024-12-09 17:39:43.266026] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.424 [2024-12-09 17:39:43.929599] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:17.424 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.683 17:39:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:17.683 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:17.683 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:17.683 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.683 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.683 Malloc0 00:29:17.683 [2024-12-09 17:39:44.017847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2085687 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2085687 /var/tmp/bdevperf.sock 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2085687 ']' 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:17.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.683 { 00:29:17.683 "params": { 00:29:17.683 "name": "Nvme$subsystem", 00:29:17.683 "trtype": "$TEST_TRANSPORT", 00:29:17.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.683 "adrfam": "ipv4", 00:29:17.683 "trsvcid": "$NVMF_PORT", 00:29:17.683 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.683 "hdgst": ${hdgst:-false}, 00:29:17.683 "ddgst": ${ddgst:-false} 00:29:17.683 }, 00:29:17.683 "method": "bdev_nvme_attach_controller" 00:29:17.683 } 00:29:17.683 EOF 00:29:17.683 )") 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:17.683 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:17.683 "params": { 00:29:17.683 "name": "Nvme0", 00:29:17.683 "trtype": "tcp", 00:29:17.683 "traddr": "10.0.0.2", 00:29:17.683 "adrfam": "ipv4", 00:29:17.683 "trsvcid": "4420", 00:29:17.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:17.683 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:17.683 "hdgst": false, 00:29:17.683 "ddgst": false 00:29:17.683 }, 00:29:17.683 "method": "bdev_nvme_attach_controller" 00:29:17.683 }' 00:29:17.683 [2024-12-09 17:39:44.111230] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:29:17.683 [2024-12-09 17:39:44.111280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085687 ] 00:29:17.683 [2024-12-09 17:39:44.186830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.943 [2024-12-09 17:39:44.226739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.943 Running I/O for 10 seconds... 
00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:17.943 17:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.943 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.201 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.201 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:29:18.201 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:29:18.201 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.461 17:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.461 [2024-12-09 17:39:44.813333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c830 is same with the state(6) to be set 00:29:18.461 [2024-12-09 17:39:44.813377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c830 is same with the state(6) to be set 00:29:18.461 [2024-12-09 17:39:44.813385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c830 is same with the state(6) to be set 00:29:18.461 [2024-12-09 17:39:44.813392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xd3c830 is same with the state(6) to be set 00:29:18.461 [2024-12-09 17:39:44.813398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c830 is same with the state(6) to be set 00:29:18.461 [2024-12-09 17:39:44.813404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c830 is same with the state(6) to be set 00:29:18.461 [2024-12-09 17:39:44.813410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c830 is same with the state(6) to be set 00:29:18.461 [2024-12-09 17:39:44.813416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c830 is same with the state(6) to be set 00:29:18.461 [2024-12-09 17:39:44.813422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c830 is same with the state(6) to be set 00:29:18.461 [2024-12-09 17:39:44.813428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c830 is same with the state(6) to be set 00:29:18.461 [2024-12-09 17:39:44.813434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c830 is same with the state(6) to be set 00:29:18.461 [2024-12-09 17:39:44.813440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c830 is same with the state(6) to be set 00:29:18.461 [2024-12-09 17:39:44.813612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.461 [2024-12-09 17:39:44.813647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.461 [2024-12-09 17:39:44.813657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.461 [2024-12-09 17:39:44.813663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.461 [2024-12-09 17:39:44.813677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.461 [2024-12-09 17:39:44.813684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.461 [2024-12-09 17:39:44.813691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.461 [2024-12-09 17:39:44.813698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.461 [2024-12-09 17:39:44.813705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x564760 is same with the state(6) to be set 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.461 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.462 [2024-12-09 17:39:44.823221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:18.462 [2024-12-09 17:39:44.823349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823429] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823512] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:18.462 [2024-12-09 17:39:44.823685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823768] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.462 [2024-12-09 17:39:44.823826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.462 [2024-12-09 17:39:44.823832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.823840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.823846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.823854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.823861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.823869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.823875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.823883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.823889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.823897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.823903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.823911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.823918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.823927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.823934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.823941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.823948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.823956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.823962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.823971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.823977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.823985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.823991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.823999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.824006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.824014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.824021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.824029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.824035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.824043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.824050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.824057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.824063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.824071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.824078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.824085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:18.463 [2024-12-09 17:39:44.824092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.824099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.824107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.824115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.824121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.824130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.824136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.824144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.824150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.824158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.824164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.824178] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.463 [2024-12-09 17:39:44.824184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.463 [2024-12-09 17:39:44.824268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x564760 (9): Bad file descriptor 00:29:18.463 [2024-12-09 17:39:44.825121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:18.463 task offset: 98304 on job bdev=Nvme0n1 fails 00:29:18.463 00:29:18.463 Latency(us) 00:29:18.463 [2024-12-09T16:39:45.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.463 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:18.463 Job: Nvme0n1 ended in about 0.40 seconds with error 00:29:18.463 Verification LBA range: start 0x0 length 0x400 00:29:18.463 Nvme0n1 : 0.40 1910.50 119.41 159.21 0.00 30105.83 1396.54 26838.55 00:29:18.463 [2024-12-09T16:39:45.003Z] =================================================================================================================== 00:29:18.463 [2024-12-09T16:39:45.003Z] Total : 1910.50 119.41 159.21 0.00 30105.83 1396.54 26838.55 00:29:18.463 [2024-12-09 17:39:44.827455] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:18.463 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.463 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:18.463 [2024-12-09 17:39:44.879541] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:19.401 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2085687 00:29:19.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2085687) - No such process 00:29:19.401 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:19.401 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:19.401 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:19.401 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:19.401 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:19.401 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:19.401 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:19.401 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:19.401 { 00:29:19.401 "params": { 00:29:19.401 "name": "Nvme$subsystem", 00:29:19.401 "trtype": "$TEST_TRANSPORT", 00:29:19.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.401 "adrfam": "ipv4", 00:29:19.401 "trsvcid": "$NVMF_PORT", 00:29:19.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.401 "hdgst": ${hdgst:-false}, 00:29:19.401 "ddgst": ${ddgst:-false} 
00:29:19.401 }, 00:29:19.401 "method": "bdev_nvme_attach_controller" 00:29:19.401 } 00:29:19.401 EOF 00:29:19.401 )") 00:29:19.401 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:19.401 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:19.401 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:19.401 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:19.401 "params": { 00:29:19.401 "name": "Nvme0", 00:29:19.401 "trtype": "tcp", 00:29:19.401 "traddr": "10.0.0.2", 00:29:19.401 "adrfam": "ipv4", 00:29:19.401 "trsvcid": "4420", 00:29:19.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.401 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:19.401 "hdgst": false, 00:29:19.401 "ddgst": false 00:29:19.401 }, 00:29:19.401 "method": "bdev_nvme_attach_controller" 00:29:19.401 }' 00:29:19.401 [2024-12-09 17:39:45.887035] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:29:19.401 [2024-12-09 17:39:45.887082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086033 ] 00:29:19.660 [2024-12-09 17:39:45.959279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.660 [2024-12-09 17:39:45.996835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.919 Running I/O for 1 seconds... 
00:29:20.856 1984.00 IOPS, 124.00 MiB/s 00:29:20.856 Latency(us) 00:29:20.856 [2024-12-09T16:39:47.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.856 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.856 Verification LBA range: start 0x0 length 0x400 00:29:20.856 Nvme0n1 : 1.01 2023.73 126.48 0.00 0.00 31135.47 7177.75 27213.04 00:29:20.856 [2024-12-09T16:39:47.396Z] =================================================================================================================== 00:29:20.856 [2024-12-09T16:39:47.396Z] Total : 2023.73 126.48 0.00 0.00 31135.47 7177.75 27213.04 00:29:21.115 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:21.115 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:21.115 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:21.115 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:21.115 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:21.115 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.115 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:21.115 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.115 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:21.116 
17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.116 rmmod nvme_tcp 00:29:21.116 rmmod nvme_fabrics 00:29:21.116 rmmod nvme_keyring 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2085524 ']' 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2085524 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2085524 ']' 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2085524 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2085524 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:21.116 17:39:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2085524' 00:29:21.116 killing process with pid 2085524 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2085524 00:29:21.116 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2085524 00:29:21.375 [2024-12-09 17:39:47.778452] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:21.375 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.375 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.375 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.375 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:21.375 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:21.375 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.375 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:21.375 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.375 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.375 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.375 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.375 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.911 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:23.911 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:23.911 00:29:23.911 real 0m12.934s 00:29:23.911 user 0m18.273s 00:29:23.911 sys 0m6.314s 00:29:23.911 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.911 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:23.911 ************************************ 00:29:23.911 END TEST nvmf_host_management 00:29:23.911 ************************************ 00:29:23.911 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:23.911 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:23.911 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.911 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:23.911 ************************************ 00:29:23.911 START TEST nvmf_lvol 00:29:23.911 ************************************ 00:29:23.911 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:23.911 * Looking for test storage... 
00:29:23.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:23.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.911 --rc genhtml_branch_coverage=1 00:29:23.911 --rc genhtml_function_coverage=1 00:29:23.911 --rc genhtml_legend=1 00:29:23.911 --rc geninfo_all_blocks=1 00:29:23.911 --rc geninfo_unexecuted_blocks=1 00:29:23.911 00:29:23.911 ' 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:23.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.911 --rc genhtml_branch_coverage=1 00:29:23.911 --rc genhtml_function_coverage=1 00:29:23.911 --rc genhtml_legend=1 00:29:23.911 --rc geninfo_all_blocks=1 00:29:23.911 --rc geninfo_unexecuted_blocks=1 00:29:23.911 00:29:23.911 ' 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:23.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.911 --rc genhtml_branch_coverage=1 00:29:23.911 --rc genhtml_function_coverage=1 00:29:23.911 --rc genhtml_legend=1 00:29:23.911 --rc geninfo_all_blocks=1 00:29:23.911 --rc geninfo_unexecuted_blocks=1 00:29:23.911 00:29:23.911 ' 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:23.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.911 --rc genhtml_branch_coverage=1 00:29:23.911 --rc genhtml_function_coverage=1 00:29:23.911 --rc genhtml_legend=1 00:29:23.911 --rc geninfo_all_blocks=1 00:29:23.911 --rc geninfo_unexecuted_blocks=1 00:29:23.911 00:29:23.911 ' 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.911 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:23.912 
17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:23.912 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:29.188 17:39:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:29.188 17:39:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:29.188 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:29.188 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.188 17:39:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:29.188 Found net devices under 0000:af:00.0: cvl_0_0 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.188 17:39:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:29.188 Found net devices under 0000:af:00.1: cvl_0_1 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:29.188 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:29.447 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:29.447 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:29.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:29:29.448 00:29:29.448 --- 10.0.0.2 ping statistics --- 00:29:29.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.448 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:29.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:29:29.448 00:29:29.448 --- 10.0.0.1 ping statistics --- 00:29:29.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.448 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:29.448 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:29.707 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:29.707 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:29.707 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:29.707 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:29.707 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2089727 
00:29:29.707 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2089727 00:29:29.707 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:29.707 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2089727 ']' 00:29:29.707 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.707 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.707 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.707 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.707 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:29.707 [2024-12-09 17:39:56.073309] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:29.707 [2024-12-09 17:39:56.074220] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:29:29.707 [2024-12-09 17:39:56.074256] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.707 [2024-12-09 17:39:56.151054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:29.707 [2024-12-09 17:39:56.189881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.707 [2024-12-09 17:39:56.189918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.707 [2024-12-09 17:39:56.189925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.707 [2024-12-09 17:39:56.189931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.707 [2024-12-09 17:39:56.189936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.707 [2024-12-09 17:39:56.191214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.707 [2024-12-09 17:39:56.191321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.707 [2024-12-09 17:39:56.191323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:29.966 [2024-12-09 17:39:56.259175] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:29.966 [2024-12-09 17:39:56.259952] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:29.966 [2024-12-09 17:39:56.260001] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:29.966 [2024-12-09 17:39:56.260187] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:29.966 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:29.966 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:29.966 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:29.966 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:29.966 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:29.966 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.966 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:30.225 [2024-12-09 17:39:56.508079] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.225 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:30.225 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:30.225 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:30.484 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:30.484 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:30.743 17:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:31.002 17:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fa093bcd-05d7-4623-b44b-19e0449a3bc7 00:29:31.002 17:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fa093bcd-05d7-4623-b44b-19e0449a3bc7 lvol 20 00:29:31.261 17:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=43028287-6752-4b1f-9d73-b065789dde21 00:29:31.261 17:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:31.261 17:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 43028287-6752-4b1f-9d73-b065789dde21 00:29:31.519 17:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:31.778 [2024-12-09 17:39:58.107996] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.778 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:32.036 
17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2090145 00:29:32.036 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:32.036 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:32.970 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 43028287-6752-4b1f-9d73-b065789dde21 MY_SNAPSHOT 00:29:33.228 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=928ae6d5-467f-4c86-b97d-3d611049dd70 00:29:33.228 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 43028287-6752-4b1f-9d73-b065789dde21 30 00:29:33.486 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 928ae6d5-467f-4c86-b97d-3d611049dd70 MY_CLONE 00:29:33.744 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cff7a9d5-e5d8-4a25-b34c-cdbe5bb9ce4e 00:29:33.744 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate cff7a9d5-e5d8-4a25-b34c-cdbe5bb9ce4e 00:29:34.310 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2090145 00:29:42.418 Initializing NVMe Controllers 00:29:42.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:42.418 
Controller IO queue size 128, less than required. 00:29:42.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:42.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:42.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:42.418 Initialization complete. Launching workers. 00:29:42.418 ======================================================== 00:29:42.418 Latency(us) 00:29:42.419 Device Information : IOPS MiB/s Average min max 00:29:42.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12523.20 48.92 10223.03 3387.95 57997.36 00:29:42.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12625.10 49.32 10143.08 1944.00 77660.95 00:29:42.419 ======================================================== 00:29:42.419 Total : 25148.30 98.24 10182.89 1944.00 77660.95 00:29:42.419 00:29:42.419 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:42.419 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 43028287-6752-4b1f-9d73-b065789dde21 00:29:42.677 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa093bcd-05d7-4623-b44b-19e0449a3bc7 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.937 rmmod nvme_tcp 00:29:42.937 rmmod nvme_fabrics 00:29:42.937 rmmod nvme_keyring 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2089727 ']' 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2089727 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2089727 ']' 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2089727 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2089727 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2089727' 00:29:42.937 killing process with pid 2089727 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2089727 00:29:42.937 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2089727 00:29:43.196 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:43.196 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:43.196 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:43.196 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:43.196 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:43.196 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:43.196 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:43.196 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:43.197 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:43.197 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.197 17:40:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.197 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.732 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:45.732 00:29:45.732 real 0m21.783s 00:29:45.732 user 0m55.601s 00:29:45.732 sys 0m9.753s 00:29:45.732 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.732 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:45.732 ************************************ 00:29:45.733 END TEST nvmf_lvol 00:29:45.733 ************************************ 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:45.733 ************************************ 00:29:45.733 START TEST nvmf_lvs_grow 00:29:45.733 ************************************ 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:45.733 * Looking for test storage... 
00:29:45.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.733 17:40:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.733 17:40:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:45.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.733 --rc genhtml_branch_coverage=1 00:29:45.733 --rc genhtml_function_coverage=1 00:29:45.733 --rc genhtml_legend=1 00:29:45.733 --rc geninfo_all_blocks=1 00:29:45.733 --rc geninfo_unexecuted_blocks=1 00:29:45.733 00:29:45.733 ' 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:45.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.733 --rc genhtml_branch_coverage=1 00:29:45.733 --rc genhtml_function_coverage=1 00:29:45.733 --rc genhtml_legend=1 00:29:45.733 --rc geninfo_all_blocks=1 00:29:45.733 --rc geninfo_unexecuted_blocks=1 00:29:45.733 00:29:45.733 ' 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:45.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.733 --rc genhtml_branch_coverage=1 00:29:45.733 --rc genhtml_function_coverage=1 00:29:45.733 --rc genhtml_legend=1 00:29:45.733 --rc geninfo_all_blocks=1 00:29:45.733 --rc geninfo_unexecuted_blocks=1 00:29:45.733 00:29:45.733 ' 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:45.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.733 --rc genhtml_branch_coverage=1 00:29:45.733 --rc genhtml_function_coverage=1 00:29:45.733 --rc genhtml_legend=1 00:29:45.733 --rc geninfo_all_blocks=1 00:29:45.733 --rc 
geninfo_unexecuted_blocks=1 00:29:45.733 00:29:45.733 ' 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:45.733 17:40:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.733 17:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.733 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.733 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.733 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.733 17:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.733 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.733 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.734 17:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:45.734 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:52.304 
17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.304 17:40:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:52.304 17:40:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:52.304 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:52.304 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:52.304 Found net devices under 0000:af:00.0: cvl_0_0 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.304 17:40:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:52.304 Found net devices under 0000:af:00.1: cvl_0_1 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.304 
17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:52.304 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:52.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:29:52.305 00:29:52.305 --- 10.0.0.2 ping statistics --- 00:29:52.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.305 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:52.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:29:52.305 00:29:52.305 --- 10.0.0.1 ping statistics --- 00:29:52.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.305 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:52.305 17:40:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2095237 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2095237 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2095237 ']' 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.305 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:52.305 [2024-12-09 17:40:17.967524] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:52.305 [2024-12-09 17:40:17.968527] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:29:52.305 [2024-12-09 17:40:17.968568] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.305 [2024-12-09 17:40:18.048217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.305 [2024-12-09 17:40:18.087535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.305 [2024-12-09 17:40:18.087569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.305 [2024-12-09 17:40:18.087576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.305 [2024-12-09 17:40:18.087582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.305 [2024-12-09 17:40:18.087587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.305 [2024-12-09 17:40:18.088048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.305 [2024-12-09 17:40:18.154789] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:52.305 [2024-12-09 17:40:18.155008] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:52.305 [2024-12-09 17:40:18.392729] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:52.305 ************************************ 00:29:52.305 START TEST lvs_grow_clean 00:29:52.305 ************************************ 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:52.305 17:40:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:52.305 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:52.564 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7ae1c497-0914-4097-adec-64e1b9876bd1 00:29:52.564 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ae1c497-0914-4097-adec-64e1b9876bd1 00:29:52.564 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:52.564 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:52.564 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:52.564 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7ae1c497-0914-4097-adec-64e1b9876bd1 lvol 150 00:29:52.823 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fcbac227-e194-4594-8ab6-b2a0c5fe1c43 00:29:52.823 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:52.823 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:53.083 [2024-12-09 17:40:19.440463] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:53.083 [2024-12-09 17:40:19.440589] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:53.083 true 00:29:53.083 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ae1c497-0914-4097-adec-64e1b9876bd1 00:29:53.083 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:53.342 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:53.342 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:53.342 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fcbac227-e194-4594-8ab6-b2a0c5fe1c43 00:29:53.601 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:53.860 [2024-12-09 17:40:20.224896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.860 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:54.119 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2095728 00:29:54.119 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:54.119 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:54.119 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2095728 /var/tmp/bdevperf.sock 00:29:54.119 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2095728 ']' 00:29:54.119 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:54.119 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.119 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:54.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:54.119 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.119 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:54.119 [2024-12-09 17:40:20.492717] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:29:54.119 [2024-12-09 17:40:20.492765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095728 ] 00:29:54.119 [2024-12-09 17:40:20.566231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.119 [2024-12-09 17:40:20.607793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.377 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.377 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:54.377 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:54.636 Nvme0n1 00:29:54.636 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:54.894 [ 00:29:54.894 { 00:29:54.894 "name": "Nvme0n1", 00:29:54.894 "aliases": [ 00:29:54.894 "fcbac227-e194-4594-8ab6-b2a0c5fe1c43" 00:29:54.894 ], 00:29:54.894 "product_name": "NVMe disk", 00:29:54.894 
"block_size": 4096, 00:29:54.894 "num_blocks": 38912, 00:29:54.894 "uuid": "fcbac227-e194-4594-8ab6-b2a0c5fe1c43", 00:29:54.894 "numa_id": 1, 00:29:54.894 "assigned_rate_limits": { 00:29:54.894 "rw_ios_per_sec": 0, 00:29:54.894 "rw_mbytes_per_sec": 0, 00:29:54.894 "r_mbytes_per_sec": 0, 00:29:54.894 "w_mbytes_per_sec": 0 00:29:54.894 }, 00:29:54.894 "claimed": false, 00:29:54.894 "zoned": false, 00:29:54.894 "supported_io_types": { 00:29:54.894 "read": true, 00:29:54.895 "write": true, 00:29:54.895 "unmap": true, 00:29:54.895 "flush": true, 00:29:54.895 "reset": true, 00:29:54.895 "nvme_admin": true, 00:29:54.895 "nvme_io": true, 00:29:54.895 "nvme_io_md": false, 00:29:54.895 "write_zeroes": true, 00:29:54.895 "zcopy": false, 00:29:54.895 "get_zone_info": false, 00:29:54.895 "zone_management": false, 00:29:54.895 "zone_append": false, 00:29:54.895 "compare": true, 00:29:54.895 "compare_and_write": true, 00:29:54.895 "abort": true, 00:29:54.895 "seek_hole": false, 00:29:54.895 "seek_data": false, 00:29:54.895 "copy": true, 00:29:54.895 "nvme_iov_md": false 00:29:54.895 }, 00:29:54.895 "memory_domains": [ 00:29:54.895 { 00:29:54.895 "dma_device_id": "system", 00:29:54.895 "dma_device_type": 1 00:29:54.895 } 00:29:54.895 ], 00:29:54.895 "driver_specific": { 00:29:54.895 "nvme": [ 00:29:54.895 { 00:29:54.895 "trid": { 00:29:54.895 "trtype": "TCP", 00:29:54.895 "adrfam": "IPv4", 00:29:54.895 "traddr": "10.0.0.2", 00:29:54.895 "trsvcid": "4420", 00:29:54.895 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:54.895 }, 00:29:54.895 "ctrlr_data": { 00:29:54.895 "cntlid": 1, 00:29:54.895 "vendor_id": "0x8086", 00:29:54.895 "model_number": "SPDK bdev Controller", 00:29:54.895 "serial_number": "SPDK0", 00:29:54.895 "firmware_revision": "25.01", 00:29:54.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:54.895 "oacs": { 00:29:54.895 "security": 0, 00:29:54.895 "format": 0, 00:29:54.895 "firmware": 0, 00:29:54.895 "ns_manage": 0 00:29:54.895 }, 00:29:54.895 "multi_ctrlr": true, 
00:29:54.895 "ana_reporting": false 00:29:54.895 }, 00:29:54.895 "vs": { 00:29:54.895 "nvme_version": "1.3" 00:29:54.895 }, 00:29:54.895 "ns_data": { 00:29:54.895 "id": 1, 00:29:54.895 "can_share": true 00:29:54.895 } 00:29:54.895 } 00:29:54.895 ], 00:29:54.895 "mp_policy": "active_passive" 00:29:54.895 } 00:29:54.895 } 00:29:54.895 ] 00:29:54.895 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2095941 00:29:54.895 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:54.895 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:54.895 Running I/O for 10 seconds... 00:29:56.274 Latency(us) 00:29:56.274 [2024-12-09T16:40:22.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:56.274 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:56.274 [2024-12-09T16:40:22.814Z] =================================================================================================================== 00:29:56.274 [2024-12-09T16:40:22.814Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:56.274 00:29:56.968 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7ae1c497-0914-4097-adec-64e1b9876bd1 00:29:56.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:56.968 Nvme0n1 : 2.00 23209.50 90.66 0.00 0.00 0.00 0.00 0.00 00:29:56.968 [2024-12-09T16:40:23.508Z] 
=================================================================================================================== 00:29:56.968 [2024-12-09T16:40:23.508Z] Total : 23209.50 90.66 0.00 0.00 0.00 0.00 0.00 00:29:56.968 00:29:56.968 true 00:29:57.230 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ae1c497-0914-4097-adec-64e1b9876bd1 00:29:57.230 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:57.230 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:57.230 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:57.230 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2095941 00:29:58.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:58.166 Nvme0n1 : 3.00 23294.67 90.99 0.00 0.00 0.00 0.00 0.00 00:29:58.166 [2024-12-09T16:40:24.706Z] =================================================================================================================== 00:29:58.166 [2024-12-09T16:40:24.706Z] Total : 23294.67 90.99 0.00 0.00 0.00 0.00 0.00 00:29:58.166 00:29:59.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:59.106 Nvme0n1 : 4.00 23408.25 91.44 0.00 0.00 0.00 0.00 0.00 00:29:59.106 [2024-12-09T16:40:25.646Z] =================================================================================================================== 00:29:59.106 [2024-12-09T16:40:25.646Z] Total : 23408.25 91.44 0.00 0.00 0.00 0.00 0.00 00:29:59.106 00:30:00.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:30:00.042 Nvme0n1 : 5.00 23463.80 91.66 0.00 0.00 0.00 0.00 0.00 00:30:00.042 [2024-12-09T16:40:26.582Z] =================================================================================================================== 00:30:00.042 [2024-12-09T16:40:26.582Z] Total : 23463.80 91.66 0.00 0.00 0.00 0.00 0.00 00:30:00.042 00:30:00.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:00.978 Nvme0n1 : 6.00 23508.83 91.83 0.00 0.00 0.00 0.00 0.00 00:30:00.978 [2024-12-09T16:40:27.518Z] =================================================================================================================== 00:30:00.978 [2024-12-09T16:40:27.518Z] Total : 23508.83 91.83 0.00 0.00 0.00 0.00 0.00 00:30:00.978 00:30:01.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:01.914 Nvme0n1 : 7.00 23543.14 91.97 0.00 0.00 0.00 0.00 0.00 00:30:01.914 [2024-12-09T16:40:28.454Z] =================================================================================================================== 00:30:01.914 [2024-12-09T16:40:28.454Z] Total : 23543.14 91.97 0.00 0.00 0.00 0.00 0.00 00:30:01.914 00:30:02.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:02.871 Nvme0n1 : 8.00 23571.00 92.07 0.00 0.00 0.00 0.00 0.00 00:30:02.871 [2024-12-09T16:40:29.411Z] =================================================================================================================== 00:30:02.871 [2024-12-09T16:40:29.411Z] Total : 23571.00 92.07 0.00 0.00 0.00 0.00 0.00 00:30:02.871 00:30:04.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:04.248 Nvme0n1 : 9.00 23562.56 92.04 0.00 0.00 0.00 0.00 0.00 00:30:04.248 [2024-12-09T16:40:30.788Z] =================================================================================================================== 00:30:04.248 [2024-12-09T16:40:30.788Z] Total : 23562.56 92.04 0.00 0.00 0.00 0.00 0.00 00:30:04.248 
00:30:05.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:05.184 Nvme0n1 : 10.00 23568.50 92.06 0.00 0.00 0.00 0.00 0.00 00:30:05.184 [2024-12-09T16:40:31.724Z] =================================================================================================================== 00:30:05.184 [2024-12-09T16:40:31.724Z] Total : 23568.50 92.06 0.00 0.00 0.00 0.00 0.00 00:30:05.184 00:30:05.184 00:30:05.184 Latency(us) 00:30:05.184 [2024-12-09T16:40:31.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:05.184 Nvme0n1 : 10.00 23569.70 92.07 0.00 0.00 5427.60 3120.76 25340.59 00:30:05.184 [2024-12-09T16:40:31.724Z] =================================================================================================================== 00:30:05.184 [2024-12-09T16:40:31.724Z] Total : 23569.70 92.07 0.00 0.00 5427.60 3120.76 25340.59 00:30:05.184 { 00:30:05.184 "results": [ 00:30:05.184 { 00:30:05.184 "job": "Nvme0n1", 00:30:05.184 "core_mask": "0x2", 00:30:05.184 "workload": "randwrite", 00:30:05.185 "status": "finished", 00:30:05.185 "queue_depth": 128, 00:30:05.185 "io_size": 4096, 00:30:05.185 "runtime": 10.004921, 00:30:05.185 "iops": 23569.701349965682, 00:30:05.185 "mibps": 92.06914589830345, 00:30:05.185 "io_failed": 0, 00:30:05.185 "io_timeout": 0, 00:30:05.185 "avg_latency_us": 5427.598109284738, 00:30:05.185 "min_latency_us": 3120.7619047619046, 00:30:05.185 "max_latency_us": 25340.586666666666 00:30:05.185 } 00:30:05.185 ], 00:30:05.185 "core_count": 1 00:30:05.185 } 00:30:05.185 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2095728 00:30:05.185 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2095728 ']' 00:30:05.185 17:40:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2095728 00:30:05.185 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:05.185 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:05.185 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2095728 00:30:05.185 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:05.185 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:05.185 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2095728' 00:30:05.185 killing process with pid 2095728 00:30:05.185 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2095728 00:30:05.185 Received shutdown signal, test time was about 10.000000 seconds 00:30:05.185 00:30:05.185 Latency(us) 00:30:05.185 [2024-12-09T16:40:31.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.185 [2024-12-09T16:40:31.725Z] =================================================================================================================== 00:30:05.185 [2024-12-09T16:40:31.725Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:05.185 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2095728 00:30:05.185 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:05.444 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:05.703 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ae1c497-0914-4097-adec-64e1b9876bd1 00:30:05.703 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:05.962 [2024-12-09 17:40:32.420507] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ae1c497-0914-4097-adec-64e1b9876bd1 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ae1c497-0914-4097-adec-64e1b9876bd1 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:05.962 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ae1c497-0914-4097-adec-64e1b9876bd1 00:30:06.221 request: 00:30:06.221 { 00:30:06.221 "uuid": "7ae1c497-0914-4097-adec-64e1b9876bd1", 00:30:06.221 "method": 
"bdev_lvol_get_lvstores", 00:30:06.221 "req_id": 1 00:30:06.221 } 00:30:06.221 Got JSON-RPC error response 00:30:06.221 response: 00:30:06.221 { 00:30:06.221 "code": -19, 00:30:06.221 "message": "No such device" 00:30:06.221 } 00:30:06.221 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:06.221 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:06.221 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:06.221 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:06.221 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:06.480 aio_bdev 00:30:06.480 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fcbac227-e194-4594-8ab6-b2a0c5fe1c43 00:30:06.480 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=fcbac227-e194-4594-8ab6-b2a0c5fe1c43 00:30:06.480 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:06.480 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:06.480 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:06.480 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:06.480 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:06.739 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fcbac227-e194-4594-8ab6-b2a0c5fe1c43 -t 2000 00:30:06.739 [ 00:30:06.739 { 00:30:06.739 "name": "fcbac227-e194-4594-8ab6-b2a0c5fe1c43", 00:30:06.739 "aliases": [ 00:30:06.739 "lvs/lvol" 00:30:06.739 ], 00:30:06.739 "product_name": "Logical Volume", 00:30:06.739 "block_size": 4096, 00:30:06.739 "num_blocks": 38912, 00:30:06.739 "uuid": "fcbac227-e194-4594-8ab6-b2a0c5fe1c43", 00:30:06.739 "assigned_rate_limits": { 00:30:06.739 "rw_ios_per_sec": 0, 00:30:06.739 "rw_mbytes_per_sec": 0, 00:30:06.739 "r_mbytes_per_sec": 0, 00:30:06.739 "w_mbytes_per_sec": 0 00:30:06.739 }, 00:30:06.739 "claimed": false, 00:30:06.739 "zoned": false, 00:30:06.739 "supported_io_types": { 00:30:06.739 "read": true, 00:30:06.739 "write": true, 00:30:06.739 "unmap": true, 00:30:06.739 "flush": false, 00:30:06.739 "reset": true, 00:30:06.739 "nvme_admin": false, 00:30:06.739 "nvme_io": false, 00:30:06.739 "nvme_io_md": false, 00:30:06.739 "write_zeroes": true, 00:30:06.739 "zcopy": false, 00:30:06.739 "get_zone_info": false, 00:30:06.739 "zone_management": false, 00:30:06.739 "zone_append": false, 00:30:06.739 "compare": false, 00:30:06.739 "compare_and_write": false, 00:30:06.739 "abort": false, 00:30:06.739 "seek_hole": true, 00:30:06.739 "seek_data": true, 00:30:06.739 "copy": false, 00:30:06.739 "nvme_iov_md": false 00:30:06.739 }, 00:30:06.739 "driver_specific": { 00:30:06.739 "lvol": { 00:30:06.739 "lvol_store_uuid": "7ae1c497-0914-4097-adec-64e1b9876bd1", 00:30:06.739 "base_bdev": "aio_bdev", 00:30:06.739 
"thin_provision": false, 00:30:06.739 "num_allocated_clusters": 38, 00:30:06.739 "snapshot": false, 00:30:06.739 "clone": false, 00:30:06.739 "esnap_clone": false 00:30:06.739 } 00:30:06.739 } 00:30:06.739 } 00:30:06.739 ] 00:30:06.739 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:06.739 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ae1c497-0914-4097-adec-64e1b9876bd1 00:30:06.740 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:06.998 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:06.999 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ae1c497-0914-4097-adec-64e1b9876bd1 00:30:06.999 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:07.257 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:07.257 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fcbac227-e194-4594-8ab6-b2a0c5fe1c43 00:30:07.516 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7ae1c497-0914-4097-adec-64e1b9876bd1 
00:30:07.516 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:07.775 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:07.775 00:30:07.775 real 0m15.781s 00:30:07.775 user 0m15.355s 00:30:07.775 sys 0m1.482s 00:30:07.775 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:07.775 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:07.775 ************************************ 00:30:07.775 END TEST lvs_grow_clean 00:30:07.775 ************************************ 00:30:07.775 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:07.775 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:07.775 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:07.775 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:08.034 ************************************ 00:30:08.034 START TEST lvs_grow_dirty 00:30:08.034 ************************************ 00:30:08.034 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:08.034 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:08.034 17:40:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:08.034 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:08.034 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:08.034 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:08.034 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:08.034 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:08.034 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:08.034 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:08.034 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:08.034 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:08.293 17:40:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:08.293 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:08.293 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:08.552 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:08.552 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:08.552 17:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e lvol 150 00:30:08.810 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a9bfc44e-2fb4-47ba-b506-a625b4878736 00:30:08.810 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:08.810 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:08.810 [2024-12-09 17:40:35.332437] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:08.810 [2024-12-09 
17:40:35.332561] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:08.810 true 00:30:08.810 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:08.810 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:09.070 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:09.070 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:09.328 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a9bfc44e-2fb4-47ba-b506-a625b4878736 00:30:09.328 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:09.586 [2024-12-09 17:40:36.036849] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.586 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:09.845 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2098242 00:30:09.845 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:09.845 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:09.845 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2098242 /var/tmp/bdevperf.sock 00:30:09.845 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2098242 ']' 00:30:09.845 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:09.845 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.845 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:09.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:09.845 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.845 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:09.845 [2024-12-09 17:40:36.280445] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:30:09.845 [2024-12-09 17:40:36.280498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2098242 ] 00:30:09.845 [2024-12-09 17:40:36.354595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.104 [2024-12-09 17:40:36.395829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.104 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.104 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:10.104 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:10.362 Nvme0n1 00:30:10.362 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:10.621 [ 00:30:10.621 { 00:30:10.621 "name": "Nvme0n1", 00:30:10.621 "aliases": [ 00:30:10.621 "a9bfc44e-2fb4-47ba-b506-a625b4878736" 00:30:10.621 ], 00:30:10.621 "product_name": "NVMe disk", 00:30:10.621 "block_size": 4096, 00:30:10.621 "num_blocks": 38912, 00:30:10.621 "uuid": "a9bfc44e-2fb4-47ba-b506-a625b4878736", 00:30:10.621 "numa_id": 1, 00:30:10.621 "assigned_rate_limits": { 00:30:10.621 "rw_ios_per_sec": 0, 00:30:10.621 "rw_mbytes_per_sec": 0, 00:30:10.621 "r_mbytes_per_sec": 0, 00:30:10.621 "w_mbytes_per_sec": 0 00:30:10.621 }, 00:30:10.621 "claimed": false, 00:30:10.621 "zoned": false, 
00:30:10.621 "supported_io_types": { 00:30:10.622 "read": true, 00:30:10.622 "write": true, 00:30:10.622 "unmap": true, 00:30:10.622 "flush": true, 00:30:10.622 "reset": true, 00:30:10.622 "nvme_admin": true, 00:30:10.622 "nvme_io": true, 00:30:10.622 "nvme_io_md": false, 00:30:10.622 "write_zeroes": true, 00:30:10.622 "zcopy": false, 00:30:10.622 "get_zone_info": false, 00:30:10.622 "zone_management": false, 00:30:10.622 "zone_append": false, 00:30:10.622 "compare": true, 00:30:10.622 "compare_and_write": true, 00:30:10.622 "abort": true, 00:30:10.622 "seek_hole": false, 00:30:10.622 "seek_data": false, 00:30:10.622 "copy": true, 00:30:10.622 "nvme_iov_md": false 00:30:10.622 }, 00:30:10.622 "memory_domains": [ 00:30:10.622 { 00:30:10.622 "dma_device_id": "system", 00:30:10.622 "dma_device_type": 1 00:30:10.622 } 00:30:10.622 ], 00:30:10.622 "driver_specific": { 00:30:10.622 "nvme": [ 00:30:10.622 { 00:30:10.622 "trid": { 00:30:10.622 "trtype": "TCP", 00:30:10.622 "adrfam": "IPv4", 00:30:10.622 "traddr": "10.0.0.2", 00:30:10.622 "trsvcid": "4420", 00:30:10.622 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:10.622 }, 00:30:10.622 "ctrlr_data": { 00:30:10.622 "cntlid": 1, 00:30:10.622 "vendor_id": "0x8086", 00:30:10.622 "model_number": "SPDK bdev Controller", 00:30:10.622 "serial_number": "SPDK0", 00:30:10.622 "firmware_revision": "25.01", 00:30:10.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.622 "oacs": { 00:30:10.622 "security": 0, 00:30:10.622 "format": 0, 00:30:10.622 "firmware": 0, 00:30:10.622 "ns_manage": 0 00:30:10.622 }, 00:30:10.622 "multi_ctrlr": true, 00:30:10.622 "ana_reporting": false 00:30:10.622 }, 00:30:10.622 "vs": { 00:30:10.622 "nvme_version": "1.3" 00:30:10.622 }, 00:30:10.622 "ns_data": { 00:30:10.622 "id": 1, 00:30:10.622 "can_share": true 00:30:10.622 } 00:30:10.622 } 00:30:10.622 ], 00:30:10.622 "mp_policy": "active_passive" 00:30:10.622 } 00:30:10.622 } 00:30:10.622 ] 00:30:10.622 17:40:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:10.622 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2098461 00:30:10.622 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:10.622 Running I/O for 10 seconds... 00:30:11.558 Latency(us) 00:30:11.558 [2024-12-09T16:40:38.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:11.558 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:11.558 [2024-12-09T16:40:38.098Z] =================================================================================================================== 00:30:11.558 [2024-12-09T16:40:38.098Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:11.558 00:30:12.494 17:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:12.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:12.494 Nvme0n1 : 2.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:30:12.494 [2024-12-09T16:40:39.034Z] =================================================================================================================== 00:30:12.494 [2024-12-09T16:40:39.034Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:30:12.494 00:30:12.753 true 00:30:12.753 17:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:12.753 17:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:13.012 17:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:13.012 17:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:13.012 17:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2098461 00:30:13.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.579 Nvme0n1 : 3.00 23325.67 91.12 0.00 0.00 0.00 0.00 0.00 00:30:13.579 [2024-12-09T16:40:40.119Z] =================================================================================================================== 00:30:13.579 [2024-12-09T16:40:40.119Z] Total : 23325.67 91.12 0.00 0.00 0.00 0.00 0.00 00:30:13.579 00:30:14.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:14.513 Nvme0n1 : 4.00 23336.25 91.16 0.00 0.00 0.00 0.00 0.00 00:30:14.513 [2024-12-09T16:40:41.053Z] =================================================================================================================== 00:30:14.513 [2024-12-09T16:40:41.053Z] Total : 23336.25 91.16 0.00 0.00 0.00 0.00 0.00 00:30:14.513 00:30:15.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.890 Nvme0n1 : 5.00 23418.80 91.48 0.00 0.00 0.00 0.00 0.00 00:30:15.890 [2024-12-09T16:40:42.430Z] =================================================================================================================== 00:30:15.890 [2024-12-09T16:40:42.430Z] Total : 23418.80 91.48 0.00 0.00 0.00 0.00 0.00 00:30:15.890 00:30:16.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:30:16.827 Nvme0n1 : 6.00 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:30:16.827 [2024-12-09T16:40:43.367Z] =================================================================================================================== 00:30:16.827 [2024-12-09T16:40:43.367Z] Total : 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:30:16.827 00:30:17.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:17.763 Nvme0n1 : 7.00 23549.43 91.99 0.00 0.00 0.00 0.00 0.00 00:30:17.763 [2024-12-09T16:40:44.303Z] =================================================================================================================== 00:30:17.763 [2024-12-09T16:40:44.303Z] Total : 23549.43 91.99 0.00 0.00 0.00 0.00 0.00 00:30:17.763 00:30:18.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.700 Nvme0n1 : 8.00 23590.25 92.15 0.00 0.00 0.00 0.00 0.00 00:30:18.700 [2024-12-09T16:40:45.240Z] =================================================================================================================== 00:30:18.700 [2024-12-09T16:40:45.240Z] Total : 23590.25 92.15 0.00 0.00 0.00 0.00 0.00 00:30:18.700 00:30:19.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.635 Nvme0n1 : 9.00 23622.00 92.27 0.00 0.00 0.00 0.00 0.00 00:30:19.635 [2024-12-09T16:40:46.175Z] =================================================================================================================== 00:30:19.635 [2024-12-09T16:40:46.175Z] Total : 23622.00 92.27 0.00 0.00 0.00 0.00 0.00 00:30:19.635 00:30:20.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.572 Nvme0n1 : 10.00 23647.40 92.37 0.00 0.00 0.00 0.00 0.00 00:30:20.572 [2024-12-09T16:40:47.112Z] =================================================================================================================== 00:30:20.572 [2024-12-09T16:40:47.112Z] Total : 23647.40 92.37 0.00 0.00 0.00 0.00 0.00 00:30:20.572 00:30:20.572 
00:30:20.572 Latency(us) 00:30:20.572 [2024-12-09T16:40:47.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.572 Nvme0n1 : 10.00 23648.51 92.38 0.00 0.00 5409.82 4712.35 27337.87 00:30:20.572 [2024-12-09T16:40:47.112Z] =================================================================================================================== 00:30:20.572 [2024-12-09T16:40:47.112Z] Total : 23648.51 92.38 0.00 0.00 5409.82 4712.35 27337.87 00:30:20.572 { 00:30:20.572 "results": [ 00:30:20.572 { 00:30:20.572 "job": "Nvme0n1", 00:30:20.572 "core_mask": "0x2", 00:30:20.572 "workload": "randwrite", 00:30:20.572 "status": "finished", 00:30:20.572 "queue_depth": 128, 00:30:20.572 "io_size": 4096, 00:30:20.572 "runtime": 10.004944, 00:30:20.572 "iops": 23648.508177557014, 00:30:20.572 "mibps": 92.37698506858209, 00:30:20.572 "io_failed": 0, 00:30:20.572 "io_timeout": 0, 00:30:20.572 "avg_latency_us": 5409.815938471719, 00:30:20.572 "min_latency_us": 4712.350476190476, 00:30:20.572 "max_latency_us": 27337.874285714286 00:30:20.572 } 00:30:20.572 ], 00:30:20.572 "core_count": 1 00:30:20.572 } 00:30:20.572 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2098242 00:30:20.572 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2098242 ']' 00:30:20.572 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2098242 00:30:20.572 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:20.572 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:20.572 17:40:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2098242 00:30:20.831 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:20.831 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:20.831 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2098242' 00:30:20.831 killing process with pid 2098242 00:30:20.831 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2098242 00:30:20.831 Received shutdown signal, test time was about 10.000000 seconds 00:30:20.831 00:30:20.831 Latency(us) 00:30:20.831 [2024-12-09T16:40:47.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.831 [2024-12-09T16:40:47.371Z] =================================================================================================================== 00:30:20.831 [2024-12-09T16:40:47.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:20.831 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2098242 00:30:20.831 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:21.090 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:21.349 17:40:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:21.349 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:21.349 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:21.349 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:21.349 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2095237 00:30:21.349 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2095237 00:30:21.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2095237 Killed "${NVMF_APP[@]}" "$@" 00:30:21.609 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:21.609 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:21.609 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:21.609 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:21.609 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:21.609 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2100183 00:30:21.609 17:40:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2100183 00:30:21.609 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:21.609 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2100183 ']' 00:30:21.609 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.609 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.609 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.609 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.609 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:21.609 [2024-12-09 17:40:47.960604] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:21.609 [2024-12-09 17:40:47.961485] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:30:21.609 [2024-12-09 17:40:47.961522] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.609 [2024-12-09 17:40:48.039136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.609 [2024-12-09 17:40:48.077947] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.609 [2024-12-09 17:40:48.077983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.609 [2024-12-09 17:40:48.077991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.609 [2024-12-09 17:40:48.077998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.609 [2024-12-09 17:40:48.078003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.609 [2024-12-09 17:40:48.078479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.609 [2024-12-09 17:40:48.146050] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:21.609 [2024-12-09 17:40:48.146260] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:21.868 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.868 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:21.868 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:21.868 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:21.868 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:21.868 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:21.868 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:21.868 [2024-12-09 17:40:48.383859] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:21.868 [2024-12-09 17:40:48.384067] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:21.868 [2024-12-09 17:40:48.384153] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:22.127 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:22.127 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a9bfc44e-2fb4-47ba-b506-a625b4878736 00:30:22.127 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=a9bfc44e-2fb4-47ba-b506-a625b4878736 00:30:22.127 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:22.127 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:22.127 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:22.127 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:22.127 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:22.127 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a9bfc44e-2fb4-47ba-b506-a625b4878736 -t 2000 00:30:22.386 [ 00:30:22.386 { 00:30:22.386 "name": "a9bfc44e-2fb4-47ba-b506-a625b4878736", 00:30:22.386 "aliases": [ 00:30:22.386 "lvs/lvol" 00:30:22.386 ], 00:30:22.386 "product_name": "Logical Volume", 00:30:22.386 "block_size": 4096, 00:30:22.386 "num_blocks": 38912, 00:30:22.386 "uuid": "a9bfc44e-2fb4-47ba-b506-a625b4878736", 00:30:22.386 "assigned_rate_limits": { 00:30:22.386 "rw_ios_per_sec": 0, 00:30:22.386 "rw_mbytes_per_sec": 0, 00:30:22.386 "r_mbytes_per_sec": 0, 00:30:22.386 "w_mbytes_per_sec": 0 00:30:22.386 }, 00:30:22.386 "claimed": false, 00:30:22.386 "zoned": false, 00:30:22.386 "supported_io_types": { 00:30:22.386 "read": true, 00:30:22.386 "write": true, 00:30:22.386 "unmap": true, 00:30:22.386 "flush": false, 00:30:22.386 "reset": true, 00:30:22.386 "nvme_admin": false, 00:30:22.386 "nvme_io": false, 00:30:22.386 "nvme_io_md": false, 00:30:22.386 "write_zeroes": true, 
00:30:22.386 "zcopy": false, 00:30:22.386 "get_zone_info": false, 00:30:22.386 "zone_management": false, 00:30:22.386 "zone_append": false, 00:30:22.386 "compare": false, 00:30:22.386 "compare_and_write": false, 00:30:22.386 "abort": false, 00:30:22.386 "seek_hole": true, 00:30:22.386 "seek_data": true, 00:30:22.386 "copy": false, 00:30:22.386 "nvme_iov_md": false 00:30:22.386 }, 00:30:22.386 "driver_specific": { 00:30:22.386 "lvol": { 00:30:22.386 "lvol_store_uuid": "a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e", 00:30:22.386 "base_bdev": "aio_bdev", 00:30:22.386 "thin_provision": false, 00:30:22.386 "num_allocated_clusters": 38, 00:30:22.386 "snapshot": false, 00:30:22.386 "clone": false, 00:30:22.386 "esnap_clone": false 00:30:22.386 } 00:30:22.386 } 00:30:22.386 } 00:30:22.386 ] 00:30:22.386 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:22.386 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:22.386 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:22.645 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:22.645 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:22.645 17:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:22.645 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:22.645 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:22.904 [2024-12-09 17:40:49.326933] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:22.904 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:22.904 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:22.904 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:22.904 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:22.904 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.904 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:22.904 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.904 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:22.904 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.904 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:22.904 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:22.904 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:23.163 request: 00:30:23.163 { 00:30:23.163 "uuid": "a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e", 00:30:23.163 "method": "bdev_lvol_get_lvstores", 00:30:23.163 "req_id": 1 00:30:23.163 } 00:30:23.163 Got JSON-RPC error response 00:30:23.163 response: 00:30:23.163 { 00:30:23.163 "code": -19, 00:30:23.163 "message": "No such device" 00:30:23.163 } 00:30:23.163 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:23.163 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:23.163 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:23.163 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:23.163 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:23.421 aio_bdev 00:30:23.421 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a9bfc44e-2fb4-47ba-b506-a625b4878736 00:30:23.421 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a9bfc44e-2fb4-47ba-b506-a625b4878736 00:30:23.421 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:23.421 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:23.421 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:23.421 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:23.421 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:23.421 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a9bfc44e-2fb4-47ba-b506-a625b4878736 -t 2000 00:30:23.679 [ 00:30:23.680 { 00:30:23.680 "name": "a9bfc44e-2fb4-47ba-b506-a625b4878736", 00:30:23.680 "aliases": [ 00:30:23.680 "lvs/lvol" 00:30:23.680 ], 00:30:23.680 "product_name": "Logical Volume", 00:30:23.680 "block_size": 4096, 00:30:23.680 "num_blocks": 38912, 00:30:23.680 "uuid": "a9bfc44e-2fb4-47ba-b506-a625b4878736", 00:30:23.680 "assigned_rate_limits": { 00:30:23.680 "rw_ios_per_sec": 0, 00:30:23.680 "rw_mbytes_per_sec": 0, 00:30:23.680 
"r_mbytes_per_sec": 0, 00:30:23.680 "w_mbytes_per_sec": 0 00:30:23.680 }, 00:30:23.680 "claimed": false, 00:30:23.680 "zoned": false, 00:30:23.680 "supported_io_types": { 00:30:23.680 "read": true, 00:30:23.680 "write": true, 00:30:23.680 "unmap": true, 00:30:23.680 "flush": false, 00:30:23.680 "reset": true, 00:30:23.680 "nvme_admin": false, 00:30:23.680 "nvme_io": false, 00:30:23.680 "nvme_io_md": false, 00:30:23.680 "write_zeroes": true, 00:30:23.680 "zcopy": false, 00:30:23.680 "get_zone_info": false, 00:30:23.680 "zone_management": false, 00:30:23.680 "zone_append": false, 00:30:23.680 "compare": false, 00:30:23.680 "compare_and_write": false, 00:30:23.680 "abort": false, 00:30:23.680 "seek_hole": true, 00:30:23.680 "seek_data": true, 00:30:23.680 "copy": false, 00:30:23.680 "nvme_iov_md": false 00:30:23.680 }, 00:30:23.680 "driver_specific": { 00:30:23.680 "lvol": { 00:30:23.680 "lvol_store_uuid": "a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e", 00:30:23.680 "base_bdev": "aio_bdev", 00:30:23.680 "thin_provision": false, 00:30:23.680 "num_allocated_clusters": 38, 00:30:23.680 "snapshot": false, 00:30:23.680 "clone": false, 00:30:23.680 "esnap_clone": false 00:30:23.680 } 00:30:23.680 } 00:30:23.680 } 00:30:23.680 ] 00:30:23.680 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:23.680 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:23.680 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:23.939 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:23.939 17:40:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:23.939 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:24.197 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:24.197 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a9bfc44e-2fb4-47ba-b506-a625b4878736 00:30:24.197 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a9e961a1-1649-4b5b-94b7-9b9fc1c0e29e 00:30:24.513 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:24.803 00:30:24.803 real 0m16.806s 00:30:24.803 user 0m34.247s 00:30:24.803 sys 0m3.727s 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:24.803 ************************************ 00:30:24.803 END TEST lvs_grow_dirty 00:30:24.803 ************************************ 
00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:24.803 nvmf_trace.0 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:24.803 17:40:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:24.803 rmmod nvme_tcp 00:30:24.803 rmmod nvme_fabrics 00:30:24.803 rmmod nvme_keyring 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2100183 ']' 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2100183 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2100183 ']' 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2100183 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2100183 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:24.803 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:24.803 
17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2100183' 00:30:24.803 killing process with pid 2100183 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2100183 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2100183 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.071 17:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.606 
17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:27.606 00:30:27.606 real 0m41.759s 00:30:27.606 user 0m52.131s 00:30:27.606 sys 0m10.050s 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:27.606 ************************************ 00:30:27.606 END TEST nvmf_lvs_grow 00:30:27.606 ************************************ 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:27.606 ************************************ 00:30:27.606 START TEST nvmf_bdev_io_wait 00:30:27.606 ************************************ 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:27.606 * Looking for test storage... 
00:30:27.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:27.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.606 --rc genhtml_branch_coverage=1 00:30:27.606 --rc genhtml_function_coverage=1 00:30:27.606 --rc genhtml_legend=1 00:30:27.606 --rc geninfo_all_blocks=1 00:30:27.606 --rc geninfo_unexecuted_blocks=1 00:30:27.606 00:30:27.606 ' 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:27.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.606 --rc genhtml_branch_coverage=1 00:30:27.606 --rc genhtml_function_coverage=1 00:30:27.606 --rc genhtml_legend=1 00:30:27.606 --rc geninfo_all_blocks=1 00:30:27.606 --rc geninfo_unexecuted_blocks=1 00:30:27.606 00:30:27.606 ' 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:27.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.606 --rc genhtml_branch_coverage=1 00:30:27.606 --rc genhtml_function_coverage=1 00:30:27.606 --rc genhtml_legend=1 00:30:27.606 --rc geninfo_all_blocks=1 00:30:27.606 --rc geninfo_unexecuted_blocks=1 00:30:27.606 00:30:27.606 ' 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:27.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.606 --rc genhtml_branch_coverage=1 00:30:27.606 --rc genhtml_function_coverage=1 
00:30:27.606 --rc genhtml_legend=1 00:30:27.606 --rc geninfo_all_blocks=1 00:30:27.606 --rc geninfo_unexecuted_blocks=1 00:30:27.606 00:30:27.606 ' 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:27.606 17:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.606 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.607 17:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.607 17:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:27.607 17:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.607 17:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:32.880 17:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.880 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:32.881 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:32.881 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:32.881 Found net devices under 0000:af:00.0: cvl_0_0 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:32.881 Found net devices under 0000:af:00.1: cvl_0_1 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:32.881 17:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:32.881 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:33.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:33.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:30:33.141 00:30:33.141 --- 10.0.0.2 ping statistics --- 00:30:33.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.141 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:33.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:30:33.141 00:30:33.141 --- 10.0.0.1 ping statistics --- 00:30:33.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.141 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:33.141 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:33.400 17:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:33.400 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2104220 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2104220 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2104220 ']' 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:33.401 [2024-12-09 17:40:59.745806] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:33.401 [2024-12-09 17:40:59.746736] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:30:33.401 [2024-12-09 17:40:59.746771] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.401 [2024-12-09 17:40:59.824223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:33.401 [2024-12-09 17:40:59.866352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.401 [2024-12-09 17:40:59.866388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.401 [2024-12-09 17:40:59.866395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.401 [2024-12-09 17:40:59.866401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.401 [2024-12-09 17:40:59.866405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:33.401 [2024-12-09 17:40:59.867726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.401 [2024-12-09 17:40:59.867834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:33.401 [2024-12-09 17:40:59.867941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.401 [2024-12-09 17:40:59.867942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:33.401 [2024-12-09 17:40:59.868200] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:33.401 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:33.660 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.660 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:33.660 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.660 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:33.660 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.660 17:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:33.660 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.660 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:33.660 [2024-12-09 17:41:00.011821] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:33.660 [2024-12-09 17:41:00.012029] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:33.660 [2024-12-09 17:41:00.012344] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:33.660 [2024-12-09 17:41:00.012541] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:33.660 [2024-12-09 17:41:00.024362] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:33.660 Malloc0 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.660 17:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.660 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:33.660 [2024-12-09 17:41:00.096778] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2104249 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2104251 00:30:33.661 17:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:33.661 { 00:30:33.661 "params": { 00:30:33.661 "name": "Nvme$subsystem", 00:30:33.661 "trtype": "$TEST_TRANSPORT", 00:30:33.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:33.661 "adrfam": "ipv4", 00:30:33.661 "trsvcid": "$NVMF_PORT", 00:30:33.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:33.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:33.661 "hdgst": ${hdgst:-false}, 00:30:33.661 "ddgst": ${ddgst:-false} 00:30:33.661 }, 00:30:33.661 "method": "bdev_nvme_attach_controller" 00:30:33.661 } 00:30:33.661 EOF 00:30:33.661 )") 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2104253 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:33.661 17:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:33.661 { 00:30:33.661 "params": { 00:30:33.661 "name": "Nvme$subsystem", 00:30:33.661 "trtype": "$TEST_TRANSPORT", 00:30:33.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:33.661 "adrfam": "ipv4", 00:30:33.661 "trsvcid": "$NVMF_PORT", 00:30:33.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:33.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:33.661 "hdgst": ${hdgst:-false}, 00:30:33.661 "ddgst": ${ddgst:-false} 00:30:33.661 }, 00:30:33.661 "method": "bdev_nvme_attach_controller" 00:30:33.661 } 00:30:33.661 EOF 00:30:33.661 )") 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2104256 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:33.661 { 00:30:33.661 "params": { 00:30:33.661 "name": 
"Nvme$subsystem", 00:30:33.661 "trtype": "$TEST_TRANSPORT", 00:30:33.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:33.661 "adrfam": "ipv4", 00:30:33.661 "trsvcid": "$NVMF_PORT", 00:30:33.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:33.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:33.661 "hdgst": ${hdgst:-false}, 00:30:33.661 "ddgst": ${ddgst:-false} 00:30:33.661 }, 00:30:33.661 "method": "bdev_nvme_attach_controller" 00:30:33.661 } 00:30:33.661 EOF 00:30:33.661 )") 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:33.661 { 00:30:33.661 "params": { 00:30:33.661 "name": "Nvme$subsystem", 00:30:33.661 "trtype": "$TEST_TRANSPORT", 00:30:33.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:33.661 "adrfam": "ipv4", 00:30:33.661 "trsvcid": "$NVMF_PORT", 00:30:33.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:33.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:33.661 "hdgst": ${hdgst:-false}, 00:30:33.661 "ddgst": ${ddgst:-false} 00:30:33.661 }, 00:30:33.661 "method": 
"bdev_nvme_attach_controller" 00:30:33.661 } 00:30:33.661 EOF 00:30:33.661 )") 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2104249 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:33.661 "params": { 00:30:33.661 "name": "Nvme1", 00:30:33.661 "trtype": "tcp", 00:30:33.661 "traddr": "10.0.0.2", 00:30:33.661 "adrfam": "ipv4", 00:30:33.661 "trsvcid": "4420", 00:30:33.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:33.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:33.661 "hdgst": false, 00:30:33.661 "ddgst": false 00:30:33.661 }, 00:30:33.661 "method": "bdev_nvme_attach_controller" 00:30:33.661 }' 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:33.661 "params": { 00:30:33.661 "name": "Nvme1", 00:30:33.661 "trtype": "tcp", 00:30:33.661 "traddr": "10.0.0.2", 00:30:33.661 "adrfam": "ipv4", 00:30:33.661 "trsvcid": "4420", 00:30:33.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:33.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:33.661 "hdgst": false, 00:30:33.661 "ddgst": false 00:30:33.661 }, 00:30:33.661 "method": "bdev_nvme_attach_controller" 00:30:33.661 }' 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:33.661 "params": { 00:30:33.661 "name": "Nvme1", 00:30:33.661 "trtype": "tcp", 00:30:33.661 "traddr": "10.0.0.2", 00:30:33.661 "adrfam": "ipv4", 00:30:33.661 "trsvcid": "4420", 00:30:33.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:33.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:33.661 "hdgst": false, 00:30:33.661 "ddgst": false 00:30:33.661 }, 00:30:33.661 "method": "bdev_nvme_attach_controller" 00:30:33.661 }' 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:33.661 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:33.661 "params": { 00:30:33.661 "name": "Nvme1", 00:30:33.661 "trtype": "tcp", 00:30:33.661 "traddr": "10.0.0.2", 00:30:33.661 "adrfam": "ipv4", 00:30:33.661 "trsvcid": "4420", 00:30:33.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:33.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:33.661 "hdgst": false, 00:30:33.661 "ddgst": false 00:30:33.661 }, 00:30:33.661 "method": "bdev_nvme_attach_controller" 
00:30:33.661 }' 00:30:33.661 [2024-12-09 17:41:00.138938] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:30:33.661 [2024-12-09 17:41:00.138987] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:33.661 [2024-12-09 17:41:00.146786] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:30:33.661 [2024-12-09 17:41:00.146828] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:33.661 [2024-12-09 17:41:00.147496] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:30:33.661 [2024-12-09 17:41:00.147535] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:33.661 [2024-12-09 17:41:00.148534] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:30:33.661 [2024-12-09 17:41:00.148571] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:33.920 [2024-12-09 17:41:00.309120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.920 [2024-12-09 17:41:00.354014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:33.920 [2024-12-09 17:41:00.400973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.920 [2024-12-09 17:41:00.445776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:34.178 [2024-12-09 17:41:00.496824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.178 [2024-12-09 17:41:00.542607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:34.178 [2024-12-09 17:41:00.592774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.178 [2024-12-09 17:41:00.639272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:34.436 Running I/O for 1 seconds... 00:30:34.436 Running I/O for 1 seconds... 00:30:34.436 Running I/O for 1 seconds... 00:30:34.436 Running I/O for 1 seconds... 
00:30:35.370 244464.00 IOPS, 954.94 MiB/s 00:30:35.370 Latency(us) 00:30:35.370 [2024-12-09T16:41:01.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.370 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:35.370 Nvme1n1 : 1.00 244098.53 953.51 0.00 0.00 521.77 220.40 1482.36 00:30:35.370 [2024-12-09T16:41:01.910Z] =================================================================================================================== 00:30:35.370 [2024-12-09T16:41:01.910Z] Total : 244098.53 953.51 0.00 0.00 521.77 220.40 1482.36 00:30:35.370 8483.00 IOPS, 33.14 MiB/s 00:30:35.370 Latency(us) 00:30:35.370 [2024-12-09T16:41:01.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.370 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:35.370 Nvme1n1 : 1.02 8474.21 33.10 0.00 0.00 14947.20 3510.86 23967.45 00:30:35.370 [2024-12-09T16:41:01.910Z] =================================================================================================================== 00:30:35.370 [2024-12-09T16:41:01.910Z] Total : 8474.21 33.10 0.00 0.00 14947.20 3510.86 23967.45 00:30:35.370 11972.00 IOPS, 46.77 MiB/s 00:30:35.370 Latency(us) 00:30:35.370 [2024-12-09T16:41:01.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.370 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:35.370 Nvme1n1 : 1.01 12018.08 46.95 0.00 0.00 10611.77 4275.44 14854.83 00:30:35.370 [2024-12-09T16:41:01.910Z] =================================================================================================================== 00:30:35.370 [2024-12-09T16:41:01.910Z] Total : 12018.08 46.95 0.00 0.00 10611.77 4275.44 14854.83 00:30:35.628 17:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2104251 00:30:35.628 9067.00 IOPS, 35.42 MiB/s 00:30:35.628 Latency(us) 00:30:35.628 
[2024-12-09T16:41:02.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.628 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:35.628 Nvme1n1 : 1.01 9201.85 35.94 0.00 0.00 13883.52 2652.65 31457.28 00:30:35.628 [2024-12-09T16:41:02.168Z] =================================================================================================================== 00:30:35.628 [2024-12-09T16:41:02.168Z] Total : 9201.85 35.94 0.00 0.00 13883.52 2652.65 31457.28 00:30:35.628 17:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2104253 00:30:35.628 17:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2104256 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:35.628 17:41:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:35.628 rmmod nvme_tcp 00:30:35.628 rmmod nvme_fabrics 00:30:35.628 rmmod nvme_keyring 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2104220 ']' 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2104220 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2104220 ']' 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2104220 00:30:35.628 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2104220 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2104220' 00:30:35.887 killing process with pid 2104220 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2104220 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2104220 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.887 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.887 
17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.423 00:30:38.423 real 0m10.800s 00:30:38.423 user 0m15.659s 00:30:38.423 sys 0m6.378s 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:38.423 ************************************ 00:30:38.423 END TEST nvmf_bdev_io_wait 00:30:38.423 ************************************ 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:38.423 ************************************ 00:30:38.423 START TEST nvmf_queue_depth 00:30:38.423 ************************************ 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:38.423 * Looking for test storage... 
00:30:38.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.423 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.438 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.438 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.438 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:38.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.439 --rc genhtml_branch_coverage=1 00:30:38.439 --rc genhtml_function_coverage=1 00:30:38.439 --rc genhtml_legend=1 00:30:38.439 --rc geninfo_all_blocks=1 00:30:38.439 --rc geninfo_unexecuted_blocks=1 00:30:38.439 00:30:38.439 ' 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:38.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.439 --rc genhtml_branch_coverage=1 00:30:38.439 --rc genhtml_function_coverage=1 00:30:38.439 --rc genhtml_legend=1 00:30:38.439 --rc geninfo_all_blocks=1 00:30:38.439 --rc geninfo_unexecuted_blocks=1 00:30:38.439 00:30:38.439 ' 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:38.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.439 --rc genhtml_branch_coverage=1 00:30:38.439 --rc genhtml_function_coverage=1 00:30:38.439 --rc genhtml_legend=1 00:30:38.439 --rc geninfo_all_blocks=1 00:30:38.439 --rc geninfo_unexecuted_blocks=1 00:30:38.439 00:30:38.439 ' 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:38.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.439 --rc genhtml_branch_coverage=1 00:30:38.439 --rc genhtml_function_coverage=1 00:30:38.439 --rc genhtml_legend=1 00:30:38.439 --rc 
geninfo_all_blocks=1 00:30:38.439 --rc geninfo_unexecuted_blocks=1 00:30:38.439 00:30:38.439 ' 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.439 17:41:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:38.439 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.440 17:41:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:38.440 17:41:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:38.440 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:43.716 
17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:43.716 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.716 17:41:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:43.716 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:43.716 Found net devices under 0000:af:00.0: cvl_0_0 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.716 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:43.716 Found net devices under 0000:af:00.1: cvl_0_1 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:43.976 17:41:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.976 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:44.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:44.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:30:44.236 00:30:44.236 --- 10.0.0.2 ping statistics --- 00:30:44.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.236 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:44.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:30:44.236 00:30:44.236 --- 10.0.0.1 ping statistics --- 00:30:44.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.236 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:44.236 17:41:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2107987 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2107987 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2107987 ']' 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.236 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:44.236 [2024-12-09 17:41:10.642017] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:44.236 [2024-12-09 17:41:10.642953] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:30:44.236 [2024-12-09 17:41:10.642988] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.236 [2024-12-09 17:41:10.725522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.236 [2024-12-09 17:41:10.764541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.236 [2024-12-09 17:41:10.764573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.236 [2024-12-09 17:41:10.764581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.236 [2024-12-09 17:41:10.764587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.236 [2024-12-09 17:41:10.764593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:44.236 [2024-12-09 17:41:10.765029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.496 [2024-12-09 17:41:10.832758] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:44.496 [2024-12-09 17:41:10.832978] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 [2024-12-09 17:41:10.909699] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 Malloc0 00:30:44.496 17:41:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 [2024-12-09 17:41:10.985859] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.496 
17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2108197 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2108197 /var/tmp/bdevperf.sock 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2108197 ']' 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:44.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.496 17:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:44.756 [2024-12-09 17:41:11.038907] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:30:44.756 [2024-12-09 17:41:11.038953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108197 ] 00:30:44.756 [2024-12-09 17:41:11.094957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.756 [2024-12-09 17:41:11.135037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.756 17:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.756 17:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:44.756 17:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:44.756 17:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.756 17:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.015 NVMe0n1 00:30:45.015 17:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.015 17:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:45.015 Running I/O for 10 seconds... 
00:30:47.331 12288.00 IOPS, 48.00 MiB/s [2024-12-09T16:41:14.809Z] 12288.00 IOPS, 48.00 MiB/s [2024-12-09T16:41:15.744Z] 12340.67 IOPS, 48.21 MiB/s [2024-12-09T16:41:16.681Z] 12434.50 IOPS, 48.57 MiB/s [2024-12-09T16:41:17.617Z] 12484.20 IOPS, 48.77 MiB/s [2024-12-09T16:41:18.996Z] 12508.17 IOPS, 48.86 MiB/s [2024-12-09T16:41:19.933Z] 12570.86 IOPS, 49.10 MiB/s [2024-12-09T16:41:20.871Z] 12578.50 IOPS, 49.13 MiB/s [2024-12-09T16:41:21.808Z] 12590.22 IOPS, 49.18 MiB/s [2024-12-09T16:41:21.808Z] 12597.50 IOPS, 49.21 MiB/s 00:30:55.268 Latency(us) 00:30:55.268 [2024-12-09T16:41:21.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.268 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:55.268 Verification LBA range: start 0x0 length 0x4000 00:30:55.268 NVMe0n1 : 10.05 12632.36 49.35 0.00 0.00 80811.73 17351.44 51430.16 00:30:55.268 [2024-12-09T16:41:21.808Z] =================================================================================================================== 00:30:55.268 [2024-12-09T16:41:21.808Z] Total : 12632.36 49.35 0.00 0.00 80811.73 17351.44 51430.16 00:30:55.268 { 00:30:55.268 "results": [ 00:30:55.268 { 00:30:55.268 "job": "NVMe0n1", 00:30:55.268 "core_mask": "0x1", 00:30:55.268 "workload": "verify", 00:30:55.268 "status": "finished", 00:30:55.268 "verify_range": { 00:30:55.268 "start": 0, 00:30:55.268 "length": 16384 00:30:55.268 }, 00:30:55.268 "queue_depth": 1024, 00:30:55.268 "io_size": 4096, 00:30:55.268 "runtime": 10.052515, 00:30:55.268 "iops": 12632.361155392457, 00:30:55.268 "mibps": 49.34516076325178, 00:30:55.268 "io_failed": 0, 00:30:55.268 "io_timeout": 0, 00:30:55.268 "avg_latency_us": 80811.73422779312, 00:30:55.268 "min_latency_us": 17351.43619047619, 00:30:55.268 "max_latency_us": 51430.15619047619 00:30:55.268 } 00:30:55.268 ], 00:30:55.268 "core_count": 1 00:30:55.268 } 00:30:55.268 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2108197 00:30:55.268 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2108197 ']' 00:30:55.268 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2108197 00:30:55.268 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:55.268 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:55.268 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2108197 00:30:55.268 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:55.268 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:55.268 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2108197' 00:30:55.268 killing process with pid 2108197 00:30:55.268 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2108197 00:30:55.268 Received shutdown signal, test time was about 10.000000 seconds 00:30:55.268 00:30:55.268 Latency(us) 00:30:55.268 [2024-12-09T16:41:21.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.268 [2024-12-09T16:41:21.808Z] =================================================================================================================== 00:30:55.268 [2024-12-09T16:41:21.809Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:55.269 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2108197 00:30:55.527 17:41:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:55.527 rmmod nvme_tcp 00:30:55.527 rmmod nvme_fabrics 00:30:55.527 rmmod nvme_keyring 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2107987 ']' 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2107987 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2107987 ']' 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2107987 00:30:55.527 17:41:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2107987 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2107987' 00:30:55.527 killing process with pid 2107987 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2107987 00:30:55.527 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2107987 00:30:55.786 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:55.786 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:55.786 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:55.786 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:55.786 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:55.786 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:55.787 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
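The `nvmfcleanup` portion of the trace (`set +e`, `for i in {1..20}`, `modprobe -v -r nvme-tcp`, then `set -e`) is a retry loop for unloading the kernel transport modules, which can transiently fail while references are still held. A hedged sketch of that loop; this is not SPDK's exact code, and it needs root plus loaded nvme-tcp/nvme-fabrics modules to do anything real, so it is defined here but not run:

```shell
# Sketch of the nvmfcleanup retry loop visible in the trace: sync, then
# attempt the unloads up to 20 times with set +e so a busy module does
# not abort the run. The rmmod lines in the log (nvme_tcp, nvme_fabrics,
# nvme_keyring) are the dependency chain modprobe -r removes.
nvmfcleanup() {
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp &&
            modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    return 0
}
```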
00:30:55.787 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:55.787 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:55.787 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.787 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.787 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.693 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:57.952 00:30:57.952 real 0m19.737s 00:30:57.952 user 0m22.895s 00:30:57.952 sys 0m6.198s 00:30:57.952 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.952 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:57.952 ************************************ 00:30:57.952 END TEST nvmf_queue_depth 00:30:57.952 ************************************ 00:30:57.952 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:57.952 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:57.952 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.952 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:57.952 ************************************ 00:30:57.952 START 
TEST nvmf_target_multipath 00:30:57.952 ************************************ 00:30:57.952 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:57.952 * Looking for test storage... 00:30:57.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:57.952 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:57.952 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:30:57.952 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:57.952 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:57.952 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:57.952 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:57.953 17:41:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:57.953 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
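The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on `.` and `-` (the `IFS=.-` lines) and compares them field by field, padding the shorter one. A simplified sketch of that less-than path (the real `scripts/common.sh` also validates each field with a `decimal` helper, omitted here):

```shell
# Sketch of the version less-than check traced above. Assumes purely
# numeric dotted fields (e.g. "1.15"); missing fields compare as 0.
lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not less-than
}

lt 1.15 2 && echo yes || echo no   # numeric compare: 1 < 2 at field 0
```

This is why `lt 1.15 2` succeeds in the trace even though `1.15 > 2` would hold under a naive string or float comparison of the first field alone.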
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:58.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.212 --rc genhtml_branch_coverage=1 00:30:58.212 --rc genhtml_function_coverage=1 00:30:58.212 --rc genhtml_legend=1 00:30:58.212 --rc geninfo_all_blocks=1 00:30:58.212 --rc geninfo_unexecuted_blocks=1 00:30:58.212 00:30:58.212 ' 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:58.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.212 --rc genhtml_branch_coverage=1 00:30:58.212 --rc genhtml_function_coverage=1 00:30:58.212 --rc genhtml_legend=1 00:30:58.212 --rc geninfo_all_blocks=1 00:30:58.212 --rc geninfo_unexecuted_blocks=1 00:30:58.212 00:30:58.212 ' 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:58.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.212 --rc genhtml_branch_coverage=1 00:30:58.212 --rc genhtml_function_coverage=1 00:30:58.212 --rc genhtml_legend=1 00:30:58.212 --rc geninfo_all_blocks=1 00:30:58.212 --rc geninfo_unexecuted_blocks=1 00:30:58.212 00:30:58.212 ' 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:58.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.212 --rc genhtml_branch_coverage=1 00:30:58.212 --rc genhtml_function_coverage=1 00:30:58.212 --rc genhtml_legend=1 00:30:58.212 --rc geninfo_all_blocks=1 00:30:58.212 --rc geninfo_unexecuted_blocks=1 00:30:58.212 00:30:58.212 ' 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.212 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:58.213 17:41:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.213 17:41:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:58.213 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:04.795 17:41:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:04.795 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:04.795 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:04.795 Found net devices under 0000:af:00.0: cvl_0_0 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.795 17:41:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:04.795 Found net devices under 0000:af:00.1: cvl_0_1 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.795 17:41:30 
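The "Found net devices under 0000:af:00.x: cvl_0_x" lines come from globbing the `net/` directory sysfs exposes under each NIC's PCI address and stripping the paths to bare interface names. A sketch of that discovery step (the optional second argument overriding the sysfs root is an addition for testability; the real script always globs `/sys`):

```shell
# Sketch of the per-PCI net-device discovery traced above. The cvl_0_*
# names in the log are the test bed's renamed E810 ports; any bound NIC
# interface would appear the same way.
list_pci_net_devs() {
    local pci=$1 root=${2:-/sys/bus/pci/devices}
    local pci_net_devs=("$root/$pci/net/"*)
    [ -e "${pci_net_devs[0]}" ] || return 1    # no interface bound to this BDF
    pci_net_devs=("${pci_net_devs[@]##*/}")    # strip path, keep ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
}
```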
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.795 17:41:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:04.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:31:04.795 00:31:04.795 --- 10.0.0.2 ping statistics --- 00:31:04.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.795 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:04.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:31:04.795 00:31:04.795 --- 10.0.0.1 ping statistics --- 00:31:04.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.795 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:04.795 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:04.796 only one NIC for nvmf test 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:04.796 17:41:30 
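The `nvmf_tcp_init` trace above moves the target NIC into a private network namespace, assigns the 10.0.0.1/10.0.0.2 pair, brings the links up, and verifies the path with `ping` before the target app starts. A minimal sketch of that sequence follows; the helper names are hypothetical, the real commands need root, so `DRY_RUN=1` just prints what would run:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf TCP test topology built above (interface names cvl_0_0 /
# cvl_0_1 taken from the log; setup_tcp_topology and run are illustrative
# helpers, not SPDK functions). Real execution requires root.
run() { if [[ "${DRY_RUN:-0}" == 1 ]]; then echo "$*"; else "$@"; fi; }

setup_tcp_topology() {
    local target_if=$1 initiator_if=$2 ns=${3:-spdk_tgt_ns}
    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"           # target NIC lives in the namespace
    run ip addr add 10.0.0.1/24 dev "$initiator_if"    # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    run ping -c 1 10.0.0.2                             # verify the path before starting nvmf_tgt
}

DRY_RUN=1 setup_tcp_topology cvl_0_0 cvl_0_1
```

Putting only the target NIC into the namespace gives the initiator and target genuinely separate network stacks on one host, so TCP traffic between 10.0.0.1 and 10.0.0.2 crosses the physical wire instead of loopback.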
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:04.796 rmmod nvme_tcp 00:31:04.796 rmmod nvme_fabrics 00:31:04.796 rmmod nvme_keyring 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:04.796 17:41:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.796 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.177 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:06.177 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:06.177 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:06.177 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:06.177 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:06.177 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:06.177 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:06.177 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:06.177 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:06.177 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:06.177 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:06.177 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:06.178 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:06.178 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:06.178 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:06.178 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:06.178 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:06.437 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:06.437 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:06.437 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:06.437 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:06.437 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:06.437 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.437 
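The `iptr` teardown traced above relies on a tagging scheme: every firewall rule the harness inserts carries an `SPDK_NVMF` marker comment (`-m comment --comment 'SPDK_NVMF:...'`), so cleanup can pipe `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore` and drop exactly its own rules. The pattern can be demonstrated on a fake ruleset dump without root (the dump contents here are illustrative, not from the log):

```shell
#!/usr/bin/env bash
# Demo of the SPDK_NVMF rule-tagging cleanup, on a mock iptables-save dump.
fake_dump='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1"
-A INPUT -j DROP'

# Teardown keeps only the untagged rules, mirroring:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
kept=$(grep -v SPDK_NVMF <<<"$fake_dump")
echo "$kept"
```

Tagging the rules at insert time means teardown never has to remember rule positions or re-derive the exact arguments it used.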
17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.437 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.437 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:06.437 00:31:06.437 real 0m8.414s 00:31:06.437 user 0m1.808s 00:31:06.437 sys 0m4.514s 00:31:06.437 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:06.437 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:06.437 ************************************ 00:31:06.437 END TEST nvmf_target_multipath 00:31:06.437 ************************************ 00:31:06.437 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:06.437 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:06.437 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:06.438 ************************************ 00:31:06.438 START TEST nvmf_zcopy 00:31:06.438 ************************************ 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:06.438 * Looking for test storage... 
00:31:06.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:06.438 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:06.699 17:41:32 
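The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.`, `-`, and `:` into an array and compares the fields numerically, left to right, padding the shorter version with zeros. A compact re-sketch of that logic (not the exact `scripts/common.sh` source):

```shell
#!/usr/bin/env bash
# lt A B: succeed iff version A is strictly less than version B, comparing
# dot/dash/colon-separated fields numerically, as in the trace above.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<<"$1"
    IFS=.-: read -ra ver2 <<<"$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # decided at this field
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

Note the comparison is per-field numeric, not lexicographic, so `1.9` sorts below `1.15` — the behavior the harness wants when gating on the lcov version.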
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:06.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.699 --rc genhtml_branch_coverage=1 00:31:06.699 --rc genhtml_function_coverage=1 00:31:06.699 --rc genhtml_legend=1 00:31:06.699 --rc geninfo_all_blocks=1 00:31:06.699 --rc geninfo_unexecuted_blocks=1 00:31:06.699 00:31:06.699 ' 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:06.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.699 --rc genhtml_branch_coverage=1 00:31:06.699 --rc genhtml_function_coverage=1 00:31:06.699 --rc genhtml_legend=1 00:31:06.699 --rc geninfo_all_blocks=1 00:31:06.699 --rc geninfo_unexecuted_blocks=1 00:31:06.699 00:31:06.699 ' 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:06.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.699 --rc genhtml_branch_coverage=1 00:31:06.699 --rc genhtml_function_coverage=1 00:31:06.699 --rc genhtml_legend=1 00:31:06.699 --rc geninfo_all_blocks=1 00:31:06.699 --rc geninfo_unexecuted_blocks=1 00:31:06.699 00:31:06.699 ' 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:06.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.699 --rc genhtml_branch_coverage=1 00:31:06.699 --rc genhtml_function_coverage=1 00:31:06.699 --rc genhtml_legend=1 00:31:06.699 --rc geninfo_all_blocks=1 00:31:06.699 --rc geninfo_unexecuted_blocks=1 00:31:06.699 00:31:06.699 ' 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.699 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.699 17:41:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:06.699 17:41:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.699 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.700 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:06.700 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:06.700 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:06.700 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:12.050 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.050 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:12.050 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:12.050 
17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:12.050 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:12.050 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:12.050 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:12.050 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:12.050 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:12.050 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:12.051 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:12.051 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:12.051 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:12.051 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:12.051 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:12.051 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.051 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.051 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.051 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.051 17:41:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.051 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.051 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.051 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:12.311 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:12.311 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:12.311 Found net devices under 0000:af:00.0: cvl_0_0 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:12.311 Found net devices under 0000:af:00.1: cvl_0_1 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.311 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.312 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:12.312 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.312 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.312 17:41:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.312 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:12.312 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:12.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:12.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:31:12.312 00:31:12.312 --- 10.0.0.2 ping statistics --- 00:31:12.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.312 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:31:12.312 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:12.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:31:12.571 00:31:12.571 --- 10.0.0.1 ping statistics --- 00:31:12.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.571 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:31:12.571 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.571 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
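The `nvmf_tcp_init` trace above wires two ports of the same NIC into a loopback topology: one port (`cvl_0_0`) is moved into a private network namespace as the target side, the other (`cvl_0_1`) stays in the root namespace as the initiator, each side gets a 10.0.0.x/24 address, an iptables rule opens TCP port 4420, and cross-pings verify reachability. A parameterized dry-run sketch of that same command sequence (the `setup_tcp_loopback` wrapper and the `$RUN` emit-only indirection are mine, so inspecting it does not require root; interface names and addresses are taken from this log):

```shell
#!/bin/bash
# Dry-run generator for the netns loopback setup traced in nvmf/common.sh.
# RUN defaults to echo so the commands are printed, not executed;
# set RUN= (empty) and run as root to actually apply them.
RUN=${RUN:-echo}

setup_tcp_loopback() {
    local target_if=$1 initiator_if=$2 ns=$3
    # Clear any stale IPv4 addresses on both ports
    $RUN ip -4 addr flush "$target_if"
    $RUN ip -4 addr flush "$initiator_if"
    # The target port lives in its own namespace so traffic between the
    # two ports traverses the wire instead of the host loopback path
    $RUN ip netns add "$ns"
    $RUN ip link set "$target_if" netns "$ns"
    # Initiator keeps 10.0.0.1; target gets 10.0.0.2 inside the namespace
    $RUN ip addr add 10.0.0.1/24 dev "$initiator_if"
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $RUN ip link set "$initiator_if" up
    $RUN ip netns exec "$ns" ip link set "$target_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    # Open the NVMe/TCP listener port on the initiator-facing interface
    $RUN iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

setup_tcp_loopback cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

With the default `RUN=echo` the script prints the ten commands in order, which matches the `ip`/`iptables` sequence visible in the trace above.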
nvmfpid=2116694 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2116694 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2116694 ']' 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:12.572 17:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:12.572 [2024-12-09 17:41:38.956854] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:12.572 [2024-12-09 17:41:38.957760] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:31:12.572 [2024-12-09 17:41:38.957796] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:12.572 [2024-12-09 17:41:39.032254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.572 [2024-12-09 17:41:39.070713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:12.572 [2024-12-09 17:41:39.070747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:12.572 [2024-12-09 17:41:39.070753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:12.572 [2024-12-09 17:41:39.070759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:12.572 [2024-12-09 17:41:39.070764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:12.572 [2024-12-09 17:41:39.071237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.832 [2024-12-09 17:41:39.137592] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:12.832 [2024-12-09 17:41:39.137807] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
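The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the autotest `waitforlisten` helper, which polls until the freshly launched `nvmf_tgt` exposes its RPC socket before any `rpc_cmd` is issued. A simplified sketch of that retry loop (the real helper also checks that the pid is still alive and that the RPC endpoint answers; this version only polls for the socket file to appear, and the parameter defaults are illustrative):

```shell
#!/bin/bash
# Poll until a UNIX domain socket appears, as the autotest helper does
# for /var/tmp/spdk.sock after starting nvmf_tgt. Returns 0 on success.
waitforlisten() {
    local sock=$1
    local max_retries=${2:-100}
    while (( max_retries-- > 0 )); do
        # -S is true once the path exists and is a socket
        [ -S "$sock" ] && return 0
        sleep 0.1
    done
    echo "socket $sock never appeared" >&2
    return 1
}
```

In the log above the wait succeeds almost immediately (the `return 0` at `common.sh@868`), after which the trap and transport-creation RPCs proceed.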
00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:12.832 [2024-12-09 17:41:39.203893] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:12.832 
17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:12.832 [2024-12-09 17:41:39.232091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:12.832 malloc0 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:12.832 { 00:31:12.832 "params": { 00:31:12.832 "name": "Nvme$subsystem", 00:31:12.832 "trtype": "$TEST_TRANSPORT", 00:31:12.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:12.832 "adrfam": "ipv4", 00:31:12.832 "trsvcid": "$NVMF_PORT", 00:31:12.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:12.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:12.832 "hdgst": ${hdgst:-false}, 00:31:12.832 "ddgst": ${ddgst:-false} 00:31:12.832 }, 00:31:12.832 "method": "bdev_nvme_attach_controller" 00:31:12.832 } 00:31:12.832 EOF 00:31:12.832 )") 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:12.832 17:41:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:12.832 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:12.832 "params": { 00:31:12.832 "name": "Nvme1", 00:31:12.832 "trtype": "tcp", 00:31:12.832 "traddr": "10.0.0.2", 00:31:12.832 "adrfam": "ipv4", 00:31:12.832 "trsvcid": "4420", 00:31:12.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:12.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:12.832 "hdgst": false, 00:31:12.832 "ddgst": false 00:31:12.832 }, 00:31:12.832 "method": "bdev_nvme_attach_controller" 00:31:12.832 }' 00:31:12.832 [2024-12-09 17:41:39.321594] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:31:12.832 [2024-12-09 17:41:39.321635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116782 ] 00:31:13.092 [2024-12-09 17:41:39.395123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.092 [2024-12-09 17:41:39.434320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.351 Running I/O for 10 seconds... 
00:31:15.666 8364.00 IOPS, 65.34 MiB/s [2024-12-09T16:41:43.144Z] 8516.00 IOPS, 66.53 MiB/s [2024-12-09T16:41:44.082Z] 8545.00 IOPS, 66.76 MiB/s [2024-12-09T16:41:45.020Z] 8577.75 IOPS, 67.01 MiB/s [2024-12-09T16:41:45.958Z] 8597.20 IOPS, 67.17 MiB/s [2024-12-09T16:41:46.896Z] 8620.83 IOPS, 67.35 MiB/s [2024-12-09T16:41:47.834Z] 8637.14 IOPS, 67.48 MiB/s [2024-12-09T16:41:49.213Z] 8649.75 IOPS, 67.58 MiB/s [2024-12-09T16:41:50.151Z] 8653.00 IOPS, 67.60 MiB/s [2024-12-09T16:41:50.151Z] 8657.70 IOPS, 67.64 MiB/s 00:31:23.611 Latency(us) 00:31:23.611 [2024-12-09T16:41:50.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:23.612 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:23.612 Verification LBA range: start 0x0 length 0x1000 00:31:23.612 Nvme1n1 : 10.01 8661.44 67.67 0.00 0.00 14735.65 920.62 21221.18 00:31:23.612 [2024-12-09T16:41:50.152Z] =================================================================================================================== 00:31:23.612 [2024-12-09T16:41:50.152Z] Total : 8661.44 67.67 0.00 0.00 14735.65 920.62 21221.18 00:31:23.612 17:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2118489 00:31:23.612 17:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:23.612 17:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.612 17:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:23.612 17:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:23.612 17:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:23.612 17:41:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:23.612 17:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:23.612 17:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:23.612 { 00:31:23.612 "params": { 00:31:23.612 "name": "Nvme$subsystem", 00:31:23.612 "trtype": "$TEST_TRANSPORT", 00:31:23.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.612 "adrfam": "ipv4", 00:31:23.612 "trsvcid": "$NVMF_PORT", 00:31:23.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.612 "hdgst": ${hdgst:-false}, 00:31:23.612 "ddgst": ${ddgst:-false} 00:31:23.612 }, 00:31:23.612 "method": "bdev_nvme_attach_controller" 00:31:23.612 } 00:31:23.612 EOF 00:31:23.612 )") 00:31:23.612 17:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:23.612 [2024-12-09 17:41:49.987569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.612 [2024-12-09 17:41:49.987598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 17:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:31:23.612 17:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:23.612 17:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:23.612 "params": { 00:31:23.612 "name": "Nvme1", 00:31:23.612 "trtype": "tcp", 00:31:23.612 "traddr": "10.0.0.2", 00:31:23.612 "adrfam": "ipv4", 00:31:23.612 "trsvcid": "4420", 00:31:23.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.612 "hdgst": false, 00:31:23.612 "ddgst": false 00:31:23.612 }, 00:31:23.612 "method": "bdev_nvme_attach_controller" 00:31:23.612 }' 00:31:23.612 [2024-12-09 17:41:49.999537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.612 [2024-12-09 17:41:49.999549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 [2024-12-09 17:41:50.011538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.612 [2024-12-09 17:41:50.011548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 [2024-12-09 17:41:50.023542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.612 [2024-12-09 17:41:50.023558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 [2024-12-09 17:41:50.025737] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:31:23.612 [2024-12-09 17:41:50.025777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118489 ] 00:31:23.612 [2024-12-09 17:41:50.031537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.612 [2024-12-09 17:41:50.031547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 [2024-12-09 17:41:50.043536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.612 [2024-12-09 17:41:50.043545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 [2024-12-09 17:41:50.055540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.612 [2024-12-09 17:41:50.055555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 [2024-12-09 17:41:50.067534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.612 [2024-12-09 17:41:50.067544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 [2024-12-09 17:41:50.079532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.612 [2024-12-09 17:41:50.079540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 [2024-12-09 17:41:50.091757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.612 [2024-12-09 17:41:50.091834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 [2024-12-09 17:41:50.102621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.612 [2024-12-09 17:41:50.103544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:23.612 [2024-12-09 17:41:50.103561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 [2024-12-09 17:41:50.115537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.612 [2024-12-09 17:41:50.115552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 [2024-12-09 17:41:50.127534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.612 [2024-12-09 17:41:50.127544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 [2024-12-09 17:41:50.139534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.612 [2024-12-09 17:41:50.139546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.612 [2024-12-09 17:41:50.143563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.872 [2024-12-09 17:41:50.151534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.151545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.163546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.163565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.175539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.175552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.187537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.187549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.199537] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.199549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.211538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.211550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.223533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.223542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.235543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.235562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.247542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.247557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.259537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.259550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.271533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.271541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.283533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.283541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.295537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.295550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.307538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.307552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.319573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.319588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.362394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.362412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.371537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.371549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 Running I/O for 5 seconds... 
00:31:23.872 [2024-12-09 17:41:50.385029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.385053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.872 [2024-12-09 17:41:50.400074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.872 [2024-12-09 17:41:50.400099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.415489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.415508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.426669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.426688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.441204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.441223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.456465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.456483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.470990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.471009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.485420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.485439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.499356] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.499374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.513371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.513391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.528145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.528165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.543013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.543034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.557705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.557725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.572548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.572568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.588132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.588149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.603009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.603028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.617066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.617085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.631704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.631723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.643562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.643580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.132 [2024-12-09 17:41:50.657456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.132 [2024-12-09 17:41:50.657474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.392 [2024-12-09 17:41:50.672549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.392 [2024-12-09 17:41:50.672568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.392 [2024-12-09 17:41:50.687124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.392 [2024-12-09 17:41:50.687143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.392 [2024-12-09 17:41:50.701671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.392 [2024-12-09 17:41:50.701693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.392 [2024-12-09 17:41:50.716315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.392 [2024-12-09 17:41:50.716333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.392 [2024-12-09 17:41:50.732008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.392 
[2024-12-09 17:41:50.732028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.392 [2024-12-09 17:41:50.747685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.392 [2024-12-09 17:41:50.747708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.392 [2024-12-09 17:41:50.759056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.392 [2024-12-09 17:41:50.759075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.392 [2024-12-09 17:41:50.773234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.392 [2024-12-09 17:41:50.773254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.392 [2024-12-09 17:41:50.787779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.392 [2024-12-09 17:41:50.787798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.392 [2024-12-09 17:41:50.799750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.392 [2024-12-09 17:41:50.799769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.392 [2024-12-09 17:41:50.813064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.392 [2024-12-09 17:41:50.813083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.392 [2024-12-09 17:41:50.827745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.393 [2024-12-09 17:41:50.827766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.393 [2024-12-09 17:41:50.841444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.393 [2024-12-09 17:41:50.841463] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.393 [2024-12-09 17:41:50.855848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.393 [2024-12-09 17:41:50.855867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.393 [2024-12-09 17:41:50.870831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.393 [2024-12-09 17:41:50.870850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.393 [2024-12-09 17:41:50.885773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.393 [2024-12-09 17:41:50.885792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.393 [2024-12-09 17:41:50.900005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.393 [2024-12-09 17:41:50.900024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.393 [2024-12-09 17:41:50.913243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.393 [2024-12-09 17:41:50.913262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.393 [2024-12-09 17:41:50.928303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.393 [2024-12-09 17:41:50.928321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.652 [2024-12-09 17:41:50.943554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.652 [2024-12-09 17:41:50.943573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.652 [2024-12-09 17:41:50.956462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.652 [2024-12-09 17:41:50.956480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:24.652 [2024-12-09 17:41:50.971153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.652 [2024-12-09 17:41:50.971176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.652 [2024-12-09 17:41:50.985095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.652 [2024-12-09 17:41:50.985114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.652 [2024-12-09 17:41:50.999600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.652 [2024-12-09 17:41:50.999619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.652 [2024-12-09 17:41:51.012524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.652 [2024-12-09 17:41:51.012542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.652 [2024-12-09 17:41:51.027579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.652 [2024-12-09 17:41:51.027597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.652 [2024-12-09 17:41:51.041891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.652 [2024-12-09 17:41:51.041909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.652 [2024-12-09 17:41:51.056623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.652 [2024-12-09 17:41:51.056641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.652 [2024-12-09 17:41:51.072003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.652 [2024-12-09 17:41:51.072021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.652 [2024-12-09 17:41:51.087727] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.652 [2024-12-09 17:41:51.087746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.653 [2024-12-09 17:41:51.101295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.653 [2024-12-09 17:41:51.101313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.653 [2024-12-09 17:41:51.115871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.653 [2024-12-09 17:41:51.115889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.653 [2024-12-09 17:41:51.131313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.653 [2024-12-09 17:41:51.131332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.653 [2024-12-09 17:41:51.144798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.653 [2024-12-09 17:41:51.144817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.653 [2024-12-09 17:41:51.159846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.653 [2024-12-09 17:41:51.159864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.653 [2024-12-09 17:41:51.172687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.653 [2024-12-09 17:41:51.172705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.653 [2024-12-09 17:41:51.183650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.653 [2024-12-09 17:41:51.183668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.912 [2024-12-09 17:41:51.197491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:24.912 [2024-12-09 17:41:51.197509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.912 [2024-12-09 17:41:51.212430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.912 [2024-12-09 17:41:51.212459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.912 [2024-12-09 17:41:51.227158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.912 [2024-12-09 17:41:51.227182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.912 [2024-12-09 17:41:51.238293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.912 [2024-12-09 17:41:51.238311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.912 [2024-12-09 17:41:51.252918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.912 [2024-12-09 17:41:51.252936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.912 [2024-12-09 17:41:51.267927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.912 [2024-12-09 17:41:51.267948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.912 [2024-12-09 17:41:51.283338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.912 [2024-12-09 17:41:51.283356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.912 [2024-12-09 17:41:51.297437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.912 [2024-12-09 17:41:51.297455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.912 [2024-12-09 17:41:51.312400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.912 
[2024-12-09 17:41:51.312418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.912 [2024-12-09 17:41:51.327199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.912 [2024-12-09 17:41:51.327217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.912 [2024-12-09 17:41:51.341287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.912 [2024-12-09 17:41:51.341305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.913 [2024-12-09 17:41:51.355778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.913 [2024-12-09 17:41:51.355796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.913 [2024-12-09 17:41:51.366317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.913 [2024-12-09 17:41:51.366335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.913 16901.00 IOPS, 132.04 MiB/s [2024-12-09T16:41:51.453Z] [2024-12-09 17:41:51.380899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.913 [2024-12-09 17:41:51.380917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.913 [2024-12-09 17:41:51.395369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.913 [2024-12-09 17:41:51.395387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.913 [2024-12-09 17:41:51.409048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.913 [2024-12-09 17:41:51.409067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.913 [2024-12-09 17:41:51.424178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.913 
[2024-12-09 17:41:51.424197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.913 [2024-12-09 17:41:51.438727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.913 [2024-12-09 17:41:51.438746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.172 [2024-12-09 17:41:51.453517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.172 [2024-12-09 17:41:51.453535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.172 [2024-12-09 17:41:51.468489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.172 [2024-12-09 17:41:51.468512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.172 [2024-12-09 17:41:51.483721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.172 [2024-12-09 17:41:51.483739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.172 [2024-12-09 17:41:51.496382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.172 [2024-12-09 17:41:51.496401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.172 [2024-12-09 17:41:51.510877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.172 [2024-12-09 17:41:51.510895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.172 [2024-12-09 17:41:51.524880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.172 [2024-12-09 17:41:51.524899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.172 [2024-12-09 17:41:51.535821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.172 [2024-12-09 17:41:51.535839] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.172 [2024-12-09 17:41:51.549669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.172 [2024-12-09 17:41:51.549687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.172 [2024-12-09 17:41:51.564659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.172 [2024-12-09 17:41:51.564677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.172 [2024-12-09 17:41:51.579532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.172 [2024-12-09 17:41:51.579551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.173 [2024-12-09 17:41:51.593633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.173 [2024-12-09 17:41:51.593651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.173 [2024-12-09 17:41:51.608572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.173 [2024-12-09 17:41:51.608590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.173 [2024-12-09 17:41:51.623424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.173 [2024-12-09 17:41:51.623442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.173 [2024-12-09 17:41:51.636035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.173 [2024-12-09 17:41:51.636053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.173 [2024-12-09 17:41:51.649393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.173 [2024-12-09 17:41:51.649412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:25.173 [2024-12-09 17:41:51.664268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.173 [2024-12-09 17:41:51.664286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.173 [2024-12-09 17:41:51.679804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.173 [2024-12-09 17:41:51.679821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.173 [2024-12-09 17:41:51.695441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.173 [2024-12-09 17:41:51.695460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.173 [2024-12-09 17:41:51.709028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.173 [2024-12-09 17:41:51.709046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.432 [2024-12-09 17:41:51.724103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.432 [2024-12-09 17:41:51.724121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.432 [2024-12-09 17:41:51.735411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.432 [2024-12-09 17:41:51.735444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.432 [2024-12-09 17:41:51.749390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.432 [2024-12-09 17:41:51.749408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.432 [2024-12-09 17:41:51.764109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.432 [2024-12-09 17:41:51.764128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.432 [2024-12-09 17:41:51.778895] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.432 [2024-12-09 17:41:51.778913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.432 [2024-12-09 17:41:51.793938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.432 [2024-12-09 17:41:51.793958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.432 [2024-12-09 17:41:51.808751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.432 [2024-12-09 17:41:51.808770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.432 [2024-12-09 17:41:51.823667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.432 [2024-12-09 17:41:51.823685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.433 [2024-12-09 17:41:51.834277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.433 [2024-12-09 17:41:51.834296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.433 [2024-12-09 17:41:51.849233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.433 [2024-12-09 17:41:51.849251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.433 [2024-12-09 17:41:51.863893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.433 [2024-12-09 17:41:51.863911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.433 [2024-12-09 17:41:51.879598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.433 [2024-12-09 17:41:51.879617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.433 [2024-12-09 17:41:51.890676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:25.433 [2024-12-09 17:41:51.890694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.433 [2024-12-09 17:41:51.905399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.433 [2024-12-09 17:41:51.905417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.433 [2024-12-09 17:41:51.920058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.433 [2024-12-09 17:41:51.920081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.433 [2024-12-09 17:41:51.935906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.433 [2024-12-09 17:41:51.935925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.433 [2024-12-09 17:41:51.948095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.433 [2024-12-09 17:41:51.948114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.433 [2024-12-09 17:41:51.961140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.433 [2024-12-09 17:41:51.961160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:51.975793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:51.975812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:51.986732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:51.986750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.001620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 
[2024-12-09 17:41:52.001643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.016669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.016688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.031456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.031476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.042882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.042901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.056911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.056929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.071780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.071799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.082047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.082065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.096621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.096640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.111238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.111258] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.124733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.124752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.136144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.136163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.148550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.148569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.163518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.163537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.176348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.176366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.191445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.191464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.205210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.205232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.691 [2024-12-09 17:41:52.219971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.691 [2024-12-09 17:41:52.219989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:26.078 [2024-12-09 17:41:52.235838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.235856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 [2024-12-09 17:41:52.251462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.251483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 [2024-12-09 17:41:52.264714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.264734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 [2024-12-09 17:41:52.280373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.280391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 [2024-12-09 17:41:52.295151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.295177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 [2024-12-09 17:41:52.309208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.309226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 [2024-12-09 17:41:52.324360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.324378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 [2024-12-09 17:41:52.339993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.340012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 [2024-12-09 17:41:52.355901] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.355925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 [2024-12-09 17:41:52.371147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.371173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 16819.50 IOPS, 131.40 MiB/s [2024-12-09T16:41:52.618Z] [2024-12-09 17:41:52.385271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.385290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 [2024-12-09 17:41:52.400080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.400098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 [2024-12-09 17:41:52.415009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.415028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 [2024-12-09 17:41:52.429065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.429083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.078 [2024-12-09 17:41:52.444003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.078 [2024-12-09 17:41:52.444021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-09 17:41:52.459213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-09 17:41:52.459232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-09 17:41:52.473388] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-09 17:41:52.473406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-09 17:41:52.487686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-09 17:41:52.487704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-09 17:41:52.501241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-09 17:41:52.501259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-09 17:41:52.516177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-09 17:41:52.516196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-09 17:41:52.531203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-09 17:41:52.531223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-09 17:41:52.544132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-09 17:41:52.544150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-09 17:41:52.556827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-09 17:41:52.556845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-09 17:41:52.572056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-09 17:41:52.572074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.354 [2024-12-09 17:41:52.587852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:26.354 [2024-12-09 17:41:52.587869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.354 [2024-12-09 17:41:52.601431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.354 [2024-12-09 17:41:52.601449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.354 [2024-12-09 17:41:52.616534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.354 [2024-12-09 17:41:52.616552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.354 [2024-12-09 17:41:52.631836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.354 [2024-12-09 17:41:52.631854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.354 [2024-12-09 17:41:52.647816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.354 [2024-12-09 17:41:52.647834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.354 [2024-12-09 17:41:52.663208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.354 [2024-12-09 17:41:52.663227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.354 [2024-12-09 17:41:52.677736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.354 [2024-12-09 17:41:52.677755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.354 [2024-12-09 17:41:52.692047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.354 [2024-12-09 17:41:52.692066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.354 [2024-12-09 17:41:52.707233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.354 
[2024-12-09 17:41:52.707256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.354 [2024-12-09 17:41:52.721819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.354 [2024-12-09 17:41:52.721837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.354 [2024-12-09 17:41:52.736786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.355 [2024-12-09 17:41:52.736804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.355 [2024-12-09 17:41:52.751475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.355 [2024-12-09 17:41:52.751493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.355 [2024-12-09 17:41:52.765493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.355 [2024-12-09 17:41:52.765511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.355 [2024-12-09 17:41:52.780622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.355 [2024-12-09 17:41:52.780640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.355 [2024-12-09 17:41:52.795658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.355 [2024-12-09 17:41:52.795676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.355 [2024-12-09 17:41:52.807513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.355 [2024-12-09 17:41:52.807533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.355 [2024-12-09 17:41:52.821584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.355 [2024-12-09 17:41:52.821605] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.355 [2024-12-09 17:41:52.836364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.355 [2024-12-09 17:41:52.836383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.355 [2024-12-09 17:41:52.850793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.355 [2024-12-09 17:41:52.850811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.355 [2024-12-09 17:41:52.865458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.355 [2024-12-09 17:41:52.865476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.355 [2024-12-09 17:41:52.880285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.355 [2024-12-09 17:41:52.880302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.614 [2024-12-09 17:41:52.895478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.614 [2024-12-09 17:41:52.895497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.614 [2024-12-09 17:41:52.907654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.614 [2024-12-09 17:41:52.907672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.614 [2024-12-09 17:41:52.921716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.614 [2024-12-09 17:41:52.921734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.614 [2024-12-09 17:41:52.936441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.614 [2024-12-09 17:41:52.936459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:26.614 [2024-12-09 17:41:52.951470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.614 [2024-12-09 17:41:52.951489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.614 [2024-12-09 17:41:52.965448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.614 [2024-12-09 17:41:52.965465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.614 [2024-12-09 17:41:52.980340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.614 [2024-12-09 17:41:52.980357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.615 [2024-12-09 17:41:52.996114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.615 [2024-12-09 17:41:52.996138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.615 [2024-12-09 17:41:53.010834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.615 [2024-12-09 17:41:53.010853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.615 [2024-12-09 17:41:53.025484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.615 [2024-12-09 17:41:53.025503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.615 [2024-12-09 17:41:53.039878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.615 [2024-12-09 17:41:53.039896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.615 [2024-12-09 17:41:53.055827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.615 [2024-12-09 17:41:53.055845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.615 [2024-12-09 17:41:53.068619] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.615 [2024-12-09 17:41:53.068637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.615 [2024-12-09 17:41:53.081123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.615 [2024-12-09 17:41:53.081148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.615 [2024-12-09 17:41:53.095596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.615 [2024-12-09 17:41:53.095616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.615 [2024-12-09 17:41:53.107697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.615 [2024-12-09 17:41:53.107717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.615 [2024-12-09 17:41:53.121568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.615 [2024-12-09 17:41:53.121587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.615 [2024-12-09 17:41:53.136532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.615 [2024-12-09 17:41:53.136552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.615 [2024-12-09 17:41:53.151413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.615 [2024-12-09 17:41:53.151432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.874 [2024-12-09 17:41:53.165217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.874 [2024-12-09 17:41:53.165236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.874 [2024-12-09 17:41:53.179880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:26.874 [2024-12-09 17:41:53.179898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.195322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.195342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.208882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.208901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.223483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.223503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.237157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.237183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.251879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.251898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.267227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.267246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.280156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.280181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.293287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 
[2024-12-09 17:41:53.293306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.308423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.308442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.323254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.323273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.335640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.335660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.349537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.349563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.364074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.364093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.379631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.379652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 16827.67 IOPS, 131.47 MiB/s [2024-12-09T16:41:53.415Z] [2024-12-09 17:41:53.393268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 [2024-12-09 17:41:53.393288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.875 [2024-12-09 17:41:53.407775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.875 
[2024-12-09 17:41:53.407794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.134 [2024-12-09 17:41:53.419159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.134 [2024-12-09 17:41:53.419183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.134 [2024-12-09 17:41:53.433346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.134 [2024-12-09 17:41:53.433366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.134 [2024-12-09 17:41:53.447990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.134 [2024-12-09 17:41:53.448009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.134 [2024-12-09 17:41:53.463841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.134 [2024-12-09 17:41:53.463860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.134 [2024-12-09 17:41:53.479613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.134 [2024-12-09 17:41:53.479638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.134 [2024-12-09 17:41:53.492172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.134 [2024-12-09 17:41:53.492207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.134 [2024-12-09 17:41:53.507627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.134 [2024-12-09 17:41:53.507647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.134 [2024-12-09 17:41:53.519974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.134 [2024-12-09 17:41:53.519994] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.134 [2024-12-09 17:41:53.533329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.134 [2024-12-09 17:41:53.533349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.134 [2024-12-09 17:41:53.547695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.134 [2024-12-09 17:41:53.547714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.135 [2024-12-09 17:41:53.559415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.135 [2024-12-09 17:41:53.559435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.135 [2024-12-09 17:41:53.573428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.135 [2024-12-09 17:41:53.573449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.135 [2024-12-09 17:41:53.588207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.135 [2024-12-09 17:41:53.588227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.135 [2024-12-09 17:41:53.603248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.135 [2024-12-09 17:41:53.603268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.135 [2024-12-09 17:41:53.616614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.135 [2024-12-09 17:41:53.616639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.135 [2024-12-09 17:41:53.631165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.135 [2024-12-09 17:41:53.631191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:27.135 [2024-12-09 17:41:53.644609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.135 [2024-12-09 17:41:53.644629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.135 [2024-12-09 17:41:53.659601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.135 [2024-12-09 17:41:53.659621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.135 [2024-12-09 17:41:53.672607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.135 [2024-12-09 17:41:53.672627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.394 [2024-12-09 17:41:53.687501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.394 [2024-12-09 17:41:53.687520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.394 [2024-12-09 17:41:53.700700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.394 [2024-12-09 17:41:53.700720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.394 [2024-12-09 17:41:53.715457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.394 [2024-12-09 17:41:53.715479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.394 [2024-12-09 17:41:53.729465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.394 [2024-12-09 17:41:53.729484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.394 [2024-12-09 17:41:53.743764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.394 [2024-12-09 17:41:53.743784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.394 [2024-12-09 17:41:53.754687] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.394 [2024-12-09 17:41:53.754707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.394 [2024-12-09 17:41:53.769015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.394 [2024-12-09 17:41:53.769034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.395 [2024-12-09 17:41:53.783387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.395 [2024-12-09 17:41:53.783406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.395 [2024-12-09 17:41:53.794418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.395 [2024-12-09 17:41:53.794438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.395 [2024-12-09 17:41:53.809216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.395 [2024-12-09 17:41:53.809235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.395 [2024-12-09 17:41:53.823765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.395 [2024-12-09 17:41:53.823784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.395 [2024-12-09 17:41:53.836415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.395 [2024-12-09 17:41:53.836435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.395 [2024-12-09 17:41:53.851475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.395 [2024-12-09 17:41:53.851495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.395 [2024-12-09 17:41:53.864370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:27.395 [2024-12-09 17:41:53.864390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.395 [2024-12-09 17:41:53.879053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.395 [2024-12-09 17:41:53.879072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.395 [2024-12-09 17:41:53.893276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.395 [2024-12-09 17:41:53.893295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.395 [2024-12-09 17:41:53.907896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.395 [2024-12-09 17:41:53.907915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.395 [2024-12-09 17:41:53.923665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.395 [2024-12-09 17:41:53.923684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.654 [2024-12-09 17:41:53.936591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.654 [2024-12-09 17:41:53.936610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.654 [2024-12-09 17:41:53.947443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.654 [2024-12-09 17:41:53.947463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.654 [2024-12-09 17:41:53.961530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.654 [2024-12-09 17:41:53.961549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.654 [2024-12-09 17:41:53.976049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.654 
[2024-12-09 17:41:53.976068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.654 [2024-12-09 17:41:53.991268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.654 [2024-12-09 17:41:53.991288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.654 [2024-12-09 17:41:54.004724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.654 [2024-12-09 17:41:54.004743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.654 [2024-12-09 17:41:54.019019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.654 [2024-12-09 17:41:54.019039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.654 [2024-12-09 17:41:54.032294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.654 [2024-12-09 17:41:54.032314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.654 [2024-12-09 17:41:54.047534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.654 [2024-12-09 17:41:54.047553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.654 [2024-12-09 17:41:54.061714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.655 [2024-12-09 17:41:54.061733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.655 [2024-12-09 17:41:54.075848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.655 [2024-12-09 17:41:54.075867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.655 [2024-12-09 17:41:54.089458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.655 [2024-12-09 17:41:54.089478] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.655 [2024-12-09 17:41:54.104371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.655 [2024-12-09 17:41:54.104390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.655 [2024-12-09 17:41:54.119105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.655 [2024-12-09 17:41:54.119124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.655 [2024-12-09 17:41:54.131768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.655 [2024-12-09 17:41:54.131788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.655 [2024-12-09 17:41:54.145003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.655 [2024-12-09 17:41:54.145021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.655 [2024-12-09 17:41:54.159249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.655 [2024-12-09 17:41:54.159269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.655 [2024-12-09 17:41:54.171726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.655 [2024-12-09 17:41:54.171745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.655 [2024-12-09 17:41:54.184980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.655 [2024-12-09 17:41:54.184999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.200492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.200511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:27.914 [2024-12-09 17:41:54.215358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.215377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.228129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.228147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.241404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.241424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.256228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.256246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.271467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.271487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.285755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.285773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.300328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.300347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.315387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.315406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.329711] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.329730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.344678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.344697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.359696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.359716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.372906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.372926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.387364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.387383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 16845.50 IOPS, 131.61 MiB/s [2024-12-09T16:41:54.454Z] [2024-12-09 17:41:54.399384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.914 [2024-12-09 17:41:54.399412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.914 [2024-12-09 17:41:54.413103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.915 [2024-12-09 17:41:54.413123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.915 [2024-12-09 17:41:54.427851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.915 [2024-12-09 17:41:54.427870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.915 [2024-12-09 17:41:54.440268] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:31:28.693 [2024-12-09 17:41:55.118661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.693 [2024-12-09 17:41:55.134159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.693 [2024-12-09 17:41:55.134187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.693 [2024-12-09 17:41:55.148847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.693 [2024-12-09 17:41:55.148867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.693 [2024-12-09 17:41:55.163881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.693 [2024-12-09 17:41:55.163901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.693 [2024-12-09 17:41:55.179712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.693 [2024-12-09 17:41:55.179733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.693 [2024-12-09 17:41:55.191379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.693 [2024-12-09 17:41:55.191405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.693 [2024-12-09 17:41:55.205340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.694 [2024-12-09 17:41:55.205358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.694 [2024-12-09 17:41:55.220262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.694 [2024-12-09 17:41:55.220283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.952 [2024-12-09 17:41:55.235930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.952 
[2024-12-09 17:41:55.235950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.952 [2024-12-09 17:41:55.351839]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.952 [2024-12-09 17:41:55.363540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.952 [2024-12-09 17:41:55.363559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.952 [2024-12-09 17:41:55.376990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.952 [2024-12-09 17:41:55.377009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.952 16835.80 IOPS, 131.53 MiB/s [2024-12-09T16:41:55.492Z] [2024-12-09 17:41:55.391595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.952 [2024-12-09 17:41:55.391614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.952
00:31:28.952 Latency(us)
00:31:28.952 [2024-12-09T16:41:55.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:28.952 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:31:28.952 Nvme1n1 : 5.01 16837.95 131.55 0.00 0.00 7594.46 1997.29 13294.45
00:31:28.952 [2024-12-09T16:41:55.492Z] ===================================================================================================================
00:31:28.952 [2024-12-09T16:41:55.492Z] Total : 16837.95 131.55 0.00 0.00 7594.46 1997.29 13294.45
00:31:28.952 [2024-12-09 17:41:55.403539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.952 [2024-12-09 17:41:55.403557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.952 [2024-12-09 17:41:55.415538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.952 [2024-12-09 17:41:55.415553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.952 [2024-12-09 17:41:55.427548]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.952 [2024-12-09 17:41:55.427566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.952 [2024-12-09 17:41:55.439540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.952 [2024-12-09 17:41:55.439557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.952 [2024-12-09 17:41:55.451541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.953 [2024-12-09 17:41:55.451556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.953 [2024-12-09 17:41:55.463537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.953 [2024-12-09 17:41:55.463552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.953 [2024-12-09 17:41:55.475536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.953 [2024-12-09 17:41:55.475550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.953 [2024-12-09 17:41:55.487539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.953 [2024-12-09 17:41:55.487553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.212 [2024-12-09 17:41:55.499538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.212 [2024-12-09 17:41:55.499552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.212 [2024-12-09 17:41:55.511535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.212 [2024-12-09 17:41:55.511545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.212 [2024-12-09 17:41:55.523539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:29.212 [2024-12-09 17:41:55.523552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.212 [2024-12-09 17:41:55.535536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.212 [2024-12-09 17:41:55.535550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.212 [2024-12-09 17:41:55.547538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.212 [2024-12-09 17:41:55.547551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2118489) - No such process 00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2118489 00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:29.212 delay0 00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.212 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:29.212 [2024-12-09 17:41:55.693817] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:35.782 Initializing NVMe Controllers 00:31:35.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:35.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:35.782 Initialization complete. Launching workers. 
00:31:35.782 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 151 00:31:35.782 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 438, failed to submit 33 00:31:35.782 success 294, unsuccessful 144, failed 0 00:31:35.782 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:35.782 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:35.782 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:35.782 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:35.782 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:35.782 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:35.782 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:35.782 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:35.782 rmmod nvme_tcp 00:31:35.782 rmmod nvme_fabrics 00:31:35.782 rmmod nvme_keyring 00:31:35.782 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2116694 ']' 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2116694 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- 
# '[' -z 2116694 ']' 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2116694 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2116694 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2116694' 00:31:35.782 killing process with pid 2116694 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2116694 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2116694 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:35.782 17:42:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.782 17:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.324 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:38.324 00:31:38.324 real 0m31.502s 00:31:38.324 user 0m41.030s 00:31:38.324 sys 0m12.134s 00:31:38.324 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:38.324 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:38.324 ************************************ 00:31:38.324 END TEST nvmf_zcopy 00:31:38.324 ************************************ 00:31:38.324 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:38.324 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:38.324 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:38.324 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:38.324 
************************************ 00:31:38.324 START TEST nvmf_nmic 00:31:38.324 ************************************ 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:38.325 * Looking for test storage... 00:31:38.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:38.325 17:42:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:38.325 17:42:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:38.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.325 --rc genhtml_branch_coverage=1 00:31:38.325 --rc genhtml_function_coverage=1 00:31:38.325 --rc genhtml_legend=1 00:31:38.325 --rc geninfo_all_blocks=1 00:31:38.325 --rc geninfo_unexecuted_blocks=1 00:31:38.325 00:31:38.325 ' 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:38.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.325 --rc genhtml_branch_coverage=1 00:31:38.325 --rc genhtml_function_coverage=1 00:31:38.325 --rc genhtml_legend=1 00:31:38.325 --rc geninfo_all_blocks=1 00:31:38.325 --rc geninfo_unexecuted_blocks=1 00:31:38.325 00:31:38.325 ' 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:38.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.325 --rc genhtml_branch_coverage=1 00:31:38.325 --rc genhtml_function_coverage=1 00:31:38.325 --rc genhtml_legend=1 00:31:38.325 --rc geninfo_all_blocks=1 00:31:38.325 --rc geninfo_unexecuted_blocks=1 00:31:38.325 00:31:38.325 ' 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:38.325 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.325 --rc genhtml_branch_coverage=1 00:31:38.325 --rc genhtml_function_coverage=1 00:31:38.325 --rc genhtml_legend=1 00:31:38.325 --rc geninfo_all_blocks=1 00:31:38.325 --rc geninfo_unexecuted_blocks=1 00:31:38.325 00:31:38.325 ' 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:38.325 17:42:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.325 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.326 17:42:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
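The `build_nvmf_app_args` trace above shows nvmf/common.sh assembling the target's command line as a bash array, appending `-i "$NVMF_APP_SHM_ID" -e 0xFFFF` unconditionally and `--interrupt-mode` only when that test flag is set. A minimal standalone sketch of that conditional-array pattern (variable names mirror the log; the `launch` printer is an illustrative stand-in, not SPDK code):

```shell
#!/usr/bin/env bash
# Sketch of the NVMF_APP array-building pattern traced from nvmf/common.sh.
# 'interrupt_mode' and 'launch' are hypothetical stand-ins for illustration.

NVMF_APP=(nvmf_tgt)                           # base command
NVMF_APP_SHM_ID=0
interrupt_mode=1                               # this run passes --interrupt-mode

NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # always-on flags
if [ "$interrupt_mode" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)               # conditional flag
fi

# Expanding "${NVMF_APP[@]}" keeps each element a single argv word,
# which is why the scripts build arrays instead of whitespace strings.
launch() { printf '%s\n' "$@"; }
launch "${NVMF_APP[@]}"
```

The same expansion is what later lets the log prepend `ip netns exec` via `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` without re-quoting anything.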
00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:38.326 17:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:44.903 17:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:44.903 17:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:44.903 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:44.903 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.903 17:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:44.903 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:44.904 Found net devices under 0000:af:00.0: cvl_0_0 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.904 17:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:44.904 Found net devices under 0000:af:00.1: cvl_0_1 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:44.904 17:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:44.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:44.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:31:44.904 00:31:44.904 --- 10.0.0.2 ping statistics --- 00:31:44.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.904 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:44.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:44.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:31:44.904 00:31:44.904 --- 10.0.0.1 ping statistics --- 00:31:44.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.904 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2124296 
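The successful pings above are the end of the `nvmf_tcp_init` plumbing traced before them: one port of the NIC is moved into a target namespace, addresses are assigned on both sides, and an iptables ACCEPT admits NVMe/TCP traffic on port 4420. A dry-run sketch of that recipe, with names taken from the log (`run` only prints the commands, so executing the sketch needs no root and touches no interfaces):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced in nvmf/common.sh.
# 'run' echoes instead of executing, so this is safe to run anywhere.
run() { printf '+ %s\n' "$*"; }

NS=cvl_0_0_ns_spdk   # target-side namespace (name from the log)
TGT=cvl_0_0          # NIC port moved into the namespace -> 10.0.0.2
INI=cvl_0_1          # NIC port left in the root namespace -> 10.0.0.1

run ip -4 addr flush "$TGT"
run ip -4 addr flush "$INI"
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP traffic arriving on the initiator interface, port 4420.
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
# Bidirectional connectivity check, matching the ping output in the log:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Keeping the initiator in the root namespace while the target runs under `ip netns exec` is what lets a single host exercise a real NIC-to-NIC TCP path.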
00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2124296 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2124296 ']' 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:44.904 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:44.904 [2024-12-09 17:42:10.624844] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:44.904 [2024-12-09 17:42:10.625761] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:31:44.904 [2024-12-09 17:42:10.625794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.904 [2024-12-09 17:42:10.702928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:44.904 [2024-12-09 17:42:10.746483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.904 [2024-12-09 17:42:10.746522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.904 [2024-12-09 17:42:10.746530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.904 [2024-12-09 17:42:10.746536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.904 [2024-12-09 17:42:10.746541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.904 [2024-12-09 17:42:10.747867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.904 [2024-12-09 17:42:10.747978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.904 [2024-12-09 17:42:10.748084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.904 [2024-12-09 17:42:10.748085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:44.904 [2024-12-09 17:42:10.817195] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:44.904 [2024-12-09 17:42:10.817559] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:44.904 [2024-12-09 17:42:10.818043] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:44.905 [2024-12-09 17:42:10.818259] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:44.905 [2024-12-09 17:42:10.818306] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:44.905 [2024-12-09 17:42:10.896728] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:44.905 Malloc0 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:44.905 [2024-12-09 17:42:10.972794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.905 17:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:44.905 test case1: single bdev can't be used in multiple subsystems 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.905 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:44.905 [2024-12-09 17:42:10.996470] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:44.905 [2024-12-09 17:42:10.996490] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:44.905 [2024-12-09 17:42:10.996498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.905 request: 00:31:44.905 { 00:31:44.905 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:44.905 "namespace": { 00:31:44.905 "bdev_name": "Malloc0", 00:31:44.905 "no_auto_visible": false, 00:31:44.905 "hide_metadata": false 00:31:44.905 }, 00:31:44.905 "method": "nvmf_subsystem_add_ns", 00:31:44.905 "req_id": 1 00:31:44.905 } 00:31:44.905 Got JSON-RPC error response 00:31:44.905 response: 00:31:44.905 { 00:31:44.905 "code": -32602, 00:31:44.905 "message": "Invalid parameters" 00:31:44.905 } 00:31:44.905 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:44.905 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:44.905 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:44.905 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:44.905 Adding namespace failed - expected result. 
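Test case 1 above deliberately provokes this failure: the harness adds the already-claimed bdev `Malloc0` to a second subsystem and keys off the JSON-RPC error object echoed in the log. A minimal sketch of checking that error shape (the `code` and `message` values are copied verbatim from the response above; the parsing code itself is illustrative and not part of the SPDK test scripts):

```python
import json

# JSON-RPC error body as printed in the log above (test case 1).
response_text = """
{
  "code": -32602,
  "message": "Invalid parameters"
}
"""

err = json.loads(response_text)

# A bdev claimed with type exclusive_write by one subsystem cannot be added
# to a second subsystem, so the harness treats this error as the expected
# result (nmic_status=1, "Adding namespace failed - expected result.").
assert err["code"] == -32602
assert err["message"] == "Invalid parameters"
print("expected failure confirmed")
```

This mirrors the shell-side check in `nmic.sh`, which records a nonzero `nmic_status` when `nvmf_subsystem_add_ns` fails and asserts that the failure occurred.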
00:31:44.905 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:44.905 test case2: host connect to nvmf target in multiple paths 00:31:44.905 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:44.905 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.905 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:44.905 [2024-12-09 17:42:11.008562] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:44.905 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.905 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:44.905 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:45.164 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:45.164 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:45.164 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:45.164 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:45.164 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:47.068 17:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:47.068 17:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:47.068 17:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:47.068 17:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:47.068 17:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:47.068 17:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:47.068 17:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:47.068 [global] 00:31:47.068 thread=1 00:31:47.068 invalidate=1 00:31:47.068 rw=write 00:31:47.068 time_based=1 00:31:47.068 runtime=1 00:31:47.068 ioengine=libaio 00:31:47.068 direct=1 00:31:47.068 bs=4096 00:31:47.068 iodepth=1 00:31:47.068 norandommap=0 00:31:47.068 numjobs=1 00:31:47.068 00:31:47.068 verify_dump=1 00:31:47.068 verify_backlog=512 00:31:47.068 verify_state_save=0 00:31:47.068 do_verify=1 00:31:47.068 verify=crc32c-intel 00:31:47.068 [job0] 00:31:47.068 filename=/dev/nvme0n1 00:31:47.068 Could not set queue depth (nvme0n1) 00:31:47.326 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.326 fio-3.35 00:31:47.326 Starting 1 thread 00:31:48.712 00:31:48.712 job0: (groupid=0, jobs=1): err= 0: pid=2125070: Mon Dec 9 
17:42:15 2024 00:31:48.712 read: IOPS=21, BW=84.5KiB/s (86.6kB/s)(88.0KiB/1041msec) 00:31:48.712 slat (nsec): min=9304, max=25341, avg=22646.50, stdev=3114.91 00:31:48.712 clat (usec): min=40839, max=41082, avg=40963.40, stdev=63.34 00:31:48.712 lat (usec): min=40864, max=41104, avg=40986.04, stdev=63.09 00:31:48.712 clat percentiles (usec): 00:31:48.712 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:48.712 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:48.712 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:48.712 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:48.712 | 99.99th=[41157] 00:31:48.712 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:31:48.712 slat (usec): min=10, max=27044, avg=64.70, stdev=1194.69 00:31:48.712 clat (usec): min=128, max=339, avg=201.55, stdev=50.64 00:31:48.712 lat (usec): min=139, max=27337, avg=266.26, stdev=1199.82 00:31:48.712 clat percentiles (usec): 00:31:48.712 | 1.00th=[ 131], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:31:48.712 | 30.00th=[ 141], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 239], 00:31:48.712 | 70.00th=[ 241], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 245], 00:31:48.712 | 99.00th=[ 251], 99.50th=[ 253], 99.90th=[ 338], 99.95th=[ 338], 00:31:48.712 | 99.99th=[ 338] 00:31:48.712 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:48.712 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:48.712 lat (usec) : 250=94.57%, 500=1.31% 00:31:48.712 lat (msec) : 50=4.12% 00:31:48.712 cpu : usr=0.38%, sys=0.96%, ctx=536, majf=0, minf=1 00:31:48.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.712 issued rwts: 
total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:48.712 00:31:48.712 Run status group 0 (all jobs): 00:31:48.712 READ: bw=84.5KiB/s (86.6kB/s), 84.5KiB/s-84.5KiB/s (86.6kB/s-86.6kB/s), io=88.0KiB (90.1kB), run=1041-1041msec 00:31:48.712 WRITE: bw=1967KiB/s (2015kB/s), 1967KiB/s-1967KiB/s (2015kB/s-2015kB/s), io=2048KiB (2097kB), run=1041-1041msec 00:31:48.712 00:31:48.712 Disk stats (read/write): 00:31:48.712 nvme0n1: ios=43/512, merge=0/0, ticks=1696/97, in_queue=1793, util=98.40% 00:31:48.712 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:48.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:48.712 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:48.712 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:48.712 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:48.712 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:48.974 17:42:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:48.974 rmmod nvme_tcp 00:31:48.974 rmmod nvme_fabrics 00:31:48.974 rmmod nvme_keyring 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2124296 ']' 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2124296 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2124296 ']' 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2124296 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2124296 
00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2124296' 00:31:48.974 killing process with pid 2124296 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2124296 00:31:48.974 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2124296 00:31:49.233 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:49.233 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:49.233 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:49.233 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:49.233 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:49.233 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:49.233 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:49.233 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:49.233 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:49.233 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.233 17:42:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.233 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.136 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:51.136 00:31:51.136 real 0m13.263s 00:31:51.136 user 0m24.612s 00:31:51.136 sys 0m6.110s 00:31:51.136 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:51.136 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.136 ************************************ 00:31:51.136 END TEST nvmf_nmic 00:31:51.136 ************************************ 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:51.396 ************************************ 00:31:51.396 START TEST nvmf_fio_target 00:31:51.396 ************************************ 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:51.396 * Looking for test storage... 
00:31:51.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:51.396 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:51.397 
17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:51.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.397 --rc genhtml_branch_coverage=1 00:31:51.397 --rc genhtml_function_coverage=1 00:31:51.397 --rc genhtml_legend=1 00:31:51.397 --rc geninfo_all_blocks=1 00:31:51.397 --rc geninfo_unexecuted_blocks=1 00:31:51.397 00:31:51.397 ' 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:51.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.397 --rc genhtml_branch_coverage=1 00:31:51.397 --rc genhtml_function_coverage=1 00:31:51.397 --rc genhtml_legend=1 00:31:51.397 --rc geninfo_all_blocks=1 00:31:51.397 --rc geninfo_unexecuted_blocks=1 00:31:51.397 00:31:51.397 ' 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:51.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.397 --rc genhtml_branch_coverage=1 00:31:51.397 --rc genhtml_function_coverage=1 00:31:51.397 --rc genhtml_legend=1 00:31:51.397 --rc geninfo_all_blocks=1 00:31:51.397 --rc geninfo_unexecuted_blocks=1 00:31:51.397 00:31:51.397 ' 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:51.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.397 --rc genhtml_branch_coverage=1 00:31:51.397 --rc genhtml_function_coverage=1 00:31:51.397 --rc genhtml_legend=1 00:31:51.397 --rc geninfo_all_blocks=1 
00:31:51.397 --rc geninfo_unexecuted_blocks=1 00:31:51.397 00:31:51.397 ' 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:51.397 
17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.397 17:42:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:51.397 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:51.398 
17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:51.398 17:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:51.398 17:42:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:57.966 17:42:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.966 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:57.967 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:57.967 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.967 
17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:57.967 Found net 
devices under 0000:af:00.0: cvl_0_0 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:57.967 Found net devices under 0000:af:00.1: cvl_0_1 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:57.967 17:42:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:57.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:57.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:31:57.967 00:31:57.967 --- 10.0.0.2 ping statistics --- 00:31:57.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.967 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:57.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:57.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:31:57.967 00:31:57.967 --- 10.0.0.1 ping statistics --- 00:31:57.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.967 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:57.967 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:57.968 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:57.968 17:42:23 
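The trace above (nvmf/common.sh, roughly lines @250–@291 of the sourced script) shows `nvmf_tcp_init` isolating the target NIC in a network namespace and verifying connectivity with ping. A dry-run sketch of those steps is below; the interface names, namespace name, IPs, and port are taken from the log, while the `run()` wrapper, the `CMDS` array, and the `DRY_RUN` flag are illustrative additions (the real commands require root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup seen in the trace. By default nothing is
# executed; each command is recorded and printed so the sequence can be
# inspected without root privileges. Set DRY_RUN=0 to actually run it.
set -euo pipefail

TARGET_IF=cvl_0_0          # moved into the namespace; SPDK target binds here
INITIATOR_IF=cvl_0_1       # stays in the root namespace; host initiator side
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
NVMF_PORT=4420
DRY_RUN=${DRY_RUN:-1}
CMDS=()

run() {
    CMDS+=("$*")
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"        # print instead of executing
    else
        "$@"
    fi
}

# Clear any stale addressing, then split the two ports across namespaces.
run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# The log's ipts helper also tags the rule with an SPDK_NVMF comment for
# later cleanup; that -m comment suffix is omitted here for brevity.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport "$NVMF_PORT" -j ACCEPT
run ping -c 1 "$TARGET_IP"
```

After this setup the target application itself is launched under the namespace with `ip netns exec cvl_0_0_ns_spdk`, which is why the `NVMF_APP` array in the log is prefixed with `NVMF_TARGET_NS_CMD`.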
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2128734 00:31:57.968 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2128734 00:31:57.968 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:57.968 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2128734 ']' 00:31:57.968 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.968 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:57.968 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.968 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:57.968 17:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:57.968 [2024-12-09 17:42:23.788198] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:57.968 [2024-12-09 17:42:23.789133] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:31:57.968 [2024-12-09 17:42:23.789180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:57.968 [2024-12-09 17:42:23.870818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:57.968 [2024-12-09 17:42:23.911282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:57.968 [2024-12-09 17:42:23.911320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:57.968 [2024-12-09 17:42:23.911326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:57.968 [2024-12-09 17:42:23.911332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:57.968 [2024-12-09 17:42:23.911336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:57.968 [2024-12-09 17:42:23.912650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.968 [2024-12-09 17:42:23.912759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:57.968 [2024-12-09 17:42:23.912865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.968 [2024-12-09 17:42:23.912866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:57.968 [2024-12-09 17:42:23.980858] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:57.968 [2024-12-09 17:42:23.981507] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:57.968 [2024-12-09 17:42:23.981722] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:57.968 [2024-12-09 17:42:23.981939] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:57.968 [2024-12-09 17:42:23.981980] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:57.968 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:57.968 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:57.968 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:57.968 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:57.968 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:57.968 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:57.968 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:57.968 [2024-12-09 17:42:24.221665] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:57.968 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:57.968 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:57.968 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:58.227 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:58.227 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:58.486 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:58.486 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:58.746 17:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:58.746 17:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:59.005 17:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:59.005 17:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:59.005 17:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:59.263 17:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:59.263 17:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:59.522 17:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:59.522 17:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:59.781 17:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:59.781 17:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:59.781 17:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:00.040 17:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:00.040 17:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:00.299 17:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:00.557 [2024-12-09 17:42:26.873547] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.557 17:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:00.815 17:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:00.815 17:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:01.074 17:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:01.074 17:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:01.074 17:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:01.074 17:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:01.074 17:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:01.074 17:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:03.608 17:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:03.608 17:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:03.608 17:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:03.608 17:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:03.608 17:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:03.608 17:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:32:03.608 17:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:03.608 [global] 00:32:03.608 thread=1 00:32:03.608 invalidate=1 00:32:03.608 rw=write 00:32:03.608 time_based=1 00:32:03.608 runtime=1 00:32:03.608 ioengine=libaio 00:32:03.608 direct=1 00:32:03.608 bs=4096 00:32:03.608 iodepth=1 00:32:03.608 norandommap=0 00:32:03.608 numjobs=1 00:32:03.608 00:32:03.608 verify_dump=1 00:32:03.608 verify_backlog=512 00:32:03.608 verify_state_save=0 00:32:03.608 do_verify=1 00:32:03.609 verify=crc32c-intel 00:32:03.609 [job0] 00:32:03.609 filename=/dev/nvme0n1 00:32:03.609 [job1] 00:32:03.609 filename=/dev/nvme0n2 00:32:03.609 [job2] 00:32:03.609 filename=/dev/nvme0n3 00:32:03.609 [job3] 00:32:03.609 filename=/dev/nvme0n4 00:32:03.609 Could not set queue depth (nvme0n1) 00:32:03.609 Could not set queue depth (nvme0n2) 00:32:03.609 Could not set queue depth (nvme0n3) 00:32:03.609 Could not set queue depth (nvme0n4) 00:32:03.609 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:03.609 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:03.609 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:03.609 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:03.609 fio-3.35 00:32:03.609 Starting 4 threads 00:32:05.001 00:32:05.001 job0: (groupid=0, jobs=1): err= 0: pid=2129869: Mon Dec 9 17:42:31 2024 00:32:05.001 read: IOPS=26, BW=108KiB/s (110kB/s)(108KiB/1001msec) 00:32:05.001 slat (nsec): min=8972, max=26299, avg=20217.44, stdev=5393.91 00:32:05.001 clat (usec): min=275, max=41542, avg=33425.85, stdev=16061.15 00:32:05.001 lat (usec): min=298, 
max=41551, avg=33446.06, stdev=16059.87 00:32:05.001 clat percentiles (usec): 00:32:05.001 | 1.00th=[ 277], 5.00th=[ 351], 10.00th=[ 351], 20.00th=[40633], 00:32:05.001 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:32:05.001 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:05.001 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:05.001 | 99.99th=[41681] 00:32:05.001 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:32:05.001 slat (nsec): min=8287, max=45229, avg=10951.39, stdev=2305.61 00:32:05.001 clat (usec): min=138, max=429, avg=176.42, stdev=18.71 00:32:05.001 lat (usec): min=148, max=474, avg=187.37, stdev=19.66 00:32:05.001 clat percentiles (usec): 00:32:05.001 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:32:05.001 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 180], 00:32:05.001 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:32:05.001 | 99.00th=[ 215], 99.50th=[ 225], 99.90th=[ 429], 99.95th=[ 429], 00:32:05.001 | 99.99th=[ 429] 00:32:05.001 bw ( KiB/s): min= 4096, max= 4096, per=15.00%, avg=4096.00, stdev= 0.00, samples=1 00:32:05.001 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:05.001 lat (usec) : 250=94.62%, 500=1.30% 00:32:05.001 lat (msec) : 50=4.08% 00:32:05.001 cpu : usr=0.60%, sys=0.70%, ctx=539, majf=0, minf=1 00:32:05.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:05.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.002 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:05.002 job1: (groupid=0, jobs=1): err= 0: pid=2129870: Mon Dec 9 17:42:31 2024 00:32:05.002 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 
00:32:05.002 slat (nsec): min=6718, max=37907, avg=8132.37, stdev=1809.46 00:32:05.002 clat (usec): min=183, max=532, avg=278.71, stdev=74.46 00:32:05.002 lat (usec): min=194, max=539, avg=286.84, stdev=74.52 00:32:05.002 clat percentiles (usec): 00:32:05.002 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 227], 00:32:05.002 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 265], 00:32:05.002 | 70.00th=[ 285], 80.00th=[ 318], 90.00th=[ 404], 95.00th=[ 469], 00:32:05.002 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 523], 99.95th=[ 529], 00:32:05.002 | 99.99th=[ 537] 00:32:05.002 write: IOPS=2256, BW=9027KiB/s (9244kB/s)(9036KiB/1001msec); 0 zone resets 00:32:05.002 slat (nsec): min=9552, max=60548, avg=11261.45, stdev=2170.60 00:32:05.002 clat (usec): min=120, max=438, avg=165.66, stdev=18.79 00:32:05.002 lat (usec): min=130, max=475, avg=176.92, stdev=19.26 00:32:05.002 clat percentiles (usec): 00:32:05.002 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 145], 20.00th=[ 155], 00:32:05.002 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:32:05.002 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 196], 00:32:05.002 | 99.00th=[ 223], 99.50th=[ 235], 99.90th=[ 260], 99.95th=[ 262], 00:32:05.002 | 99.99th=[ 437] 00:32:05.002 bw ( KiB/s): min= 8824, max= 8824, per=32.32%, avg=8824.00, stdev= 0.00, samples=1 00:32:05.002 iops : min= 2206, max= 2206, avg=2206.00, stdev= 0.00, samples=1 00:32:05.002 lat (usec) : 250=75.30%, 500=24.05%, 750=0.65% 00:32:05.002 cpu : usr=3.80%, sys=6.50%, ctx=4307, majf=0, minf=2 00:32:05.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:05.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.002 issued rwts: total=2048,2259,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:05.002 job2: 
(groupid=0, jobs=1): err= 0: pid=2129871: Mon Dec 9 17:42:31 2024 00:32:05.002 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:05.002 slat (nsec): min=6596, max=26033, avg=8359.21, stdev=1052.04 00:32:05.002 clat (usec): min=193, max=519, avg=274.09, stdev=68.13 00:32:05.002 lat (usec): min=201, max=527, avg=282.45, stdev=68.21 00:32:05.002 clat percentiles (usec): 00:32:05.002 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 227], 00:32:05.002 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 265], 00:32:05.002 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 343], 95.00th=[ 461], 00:32:05.002 | 99.00th=[ 502], 99.50th=[ 506], 99.90th=[ 515], 99.95th=[ 519], 00:32:05.002 | 99.99th=[ 519] 00:32:05.002 write: IOPS=2059, BW=8240KiB/s (8438kB/s)(8248KiB/1001msec); 0 zone resets 00:32:05.002 slat (nsec): min=9258, max=43449, avg=11608.26, stdev=1970.29 00:32:05.002 clat (usec): min=132, max=416, avg=186.26, stdev=36.47 00:32:05.002 lat (usec): min=143, max=457, avg=197.87, stdev=36.13 00:32:05.002 clat percentiles (usec): 00:32:05.002 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:32:05.002 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:32:05.002 | 70.00th=[ 186], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 247], 00:32:05.002 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 285], 99.95th=[ 310], 00:32:05.002 | 99.99th=[ 416] 00:32:05.002 bw ( KiB/s): min= 8192, max= 8192, per=30.00%, avg=8192.00, stdev= 0.00, samples=1 00:32:05.002 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:05.002 lat (usec) : 250=71.82%, 500=27.45%, 750=0.73% 00:32:05.002 cpu : usr=4.10%, sys=5.60%, ctx=4110, majf=0, minf=1 00:32:05.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:05.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.002 issued rwts: 
total=2048,2062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:05.002 job3: (groupid=0, jobs=1): err= 0: pid=2129872: Mon Dec 9 17:42:31 2024 00:32:05.002 read: IOPS=1961, BW=7845KiB/s (8034kB/s)(7908KiB/1008msec) 00:32:05.002 slat (nsec): min=6825, max=27126, avg=8184.33, stdev=1459.70 00:32:05.002 clat (usec): min=200, max=40922, avg=293.86, stdev=1287.99 00:32:05.002 lat (usec): min=207, max=40946, avg=302.04, stdev=1288.29 00:32:05.002 clat percentiles (usec): 00:32:05.002 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:32:05.002 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:32:05.002 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:32:05.002 | 99.00th=[ 469], 99.50th=[ 498], 99.90th=[40633], 99.95th=[41157], 00:32:05.002 | 99.99th=[41157] 00:32:05.002 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:32:05.002 slat (nsec): min=5629, max=42658, avg=11137.61, stdev=1935.05 00:32:05.002 clat (usec): min=138, max=417, avg=183.51, stdev=22.79 00:32:05.002 lat (usec): min=149, max=460, avg=194.65, stdev=22.91 00:32:05.002 clat percentiles (usec): 00:32:05.002 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:32:05.002 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:32:05.002 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 206], 95.00th=[ 241], 00:32:05.002 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 314], 99.95th=[ 371], 00:32:05.002 | 99.99th=[ 416] 00:32:05.002 bw ( KiB/s): min= 7352, max= 9032, per=30.00%, avg=8192.00, stdev=1187.94, samples=2 00:32:05.002 iops : min= 1838, max= 2258, avg=2048.00, stdev=296.98, samples=2 00:32:05.002 lat (usec) : 250=80.87%, 500=18.96%, 750=0.10% 00:32:05.002 lat (msec) : 10=0.02%, 50=0.05% 00:32:05.002 cpu : usr=3.08%, sys=6.45%, ctx=4025, majf=0, minf=1 00:32:05.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:32:05.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.002 issued rwts: total=1977,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:05.002 00:32:05.002 Run status group 0 (all jobs): 00:32:05.002 READ: bw=23.6MiB/s (24.8MB/s), 108KiB/s-8184KiB/s (110kB/s-8380kB/s), io=23.8MiB (25.0MB), run=1001-1008msec 00:32:05.002 WRITE: bw=26.7MiB/s (28.0MB/s), 2046KiB/s-9027KiB/s (2095kB/s-9244kB/s), io=26.9MiB (28.2MB), run=1001-1008msec 00:32:05.002 00:32:05.002 Disk stats (read/write): 00:32:05.002 nvme0n1: ios=70/512, merge=0/0, ticks=719/84, in_queue=803, util=82.06% 00:32:05.002 nvme0n2: ios=1536/2045, merge=0/0, ticks=382/323, in_queue=705, util=82.94% 00:32:05.002 nvme0n3: ios=1536/1960, merge=0/0, ticks=383/344, in_queue=727, util=87.53% 00:32:05.002 nvme0n4: ios=1536/2029, merge=0/0, ticks=368/357, in_queue=725, util=89.17% 00:32:05.002 17:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:05.002 [global] 00:32:05.002 thread=1 00:32:05.002 invalidate=1 00:32:05.002 rw=randwrite 00:32:05.002 time_based=1 00:32:05.002 runtime=1 00:32:05.002 ioengine=libaio 00:32:05.002 direct=1 00:32:05.002 bs=4096 00:32:05.002 iodepth=1 00:32:05.002 norandommap=0 00:32:05.002 numjobs=1 00:32:05.002 00:32:05.002 verify_dump=1 00:32:05.002 verify_backlog=512 00:32:05.002 verify_state_save=0 00:32:05.002 do_verify=1 00:32:05.002 verify=crc32c-intel 00:32:05.002 [job0] 00:32:05.002 filename=/dev/nvme0n1 00:32:05.002 [job1] 00:32:05.002 filename=/dev/nvme0n2 00:32:05.002 [job2] 00:32:05.002 filename=/dev/nvme0n3 00:32:05.002 [job3] 00:32:05.002 filename=/dev/nvme0n4 00:32:05.002 Could not set queue depth (nvme0n1) 00:32:05.002 Could not set queue 
depth (nvme0n2) 00:32:05.002 Could not set queue depth (nvme0n3) 00:32:05.002 Could not set queue depth (nvme0n4) 00:32:05.260 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:05.260 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:05.260 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:05.260 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:05.260 fio-3.35 00:32:05.260 Starting 4 threads 00:32:06.634 00:32:06.634 job0: (groupid=0, jobs=1): err= 0: pid=2130237: Mon Dec 9 17:42:32 2024 00:32:06.634 read: IOPS=664, BW=2656KiB/s (2720kB/s)(2664KiB/1003msec) 00:32:06.634 slat (nsec): min=3746, max=19858, avg=7315.09, stdev=1368.74 00:32:06.634 clat (usec): min=195, max=41277, avg=1207.81, stdev=6227.35 00:32:06.634 lat (usec): min=202, max=41289, avg=1215.13, stdev=6227.70 00:32:06.634 clat percentiles (usec): 00:32:06.634 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 219], 00:32:06.634 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 229], 00:32:06.634 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 260], 95.00th=[ 302], 00:32:06.634 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:06.634 | 99.99th=[41157] 00:32:06.634 write: IOPS=1020, BW=4084KiB/s (4182kB/s)(4096KiB/1003msec); 0 zone resets 00:32:06.634 slat (nsec): min=9064, max=44488, avg=10099.37, stdev=1508.31 00:32:06.634 clat (usec): min=130, max=376, avg=174.93, stdev=20.46 00:32:06.634 lat (usec): min=139, max=421, avg=185.03, stdev=20.79 00:32:06.634 clat percentiles (usec): 00:32:06.634 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:32:06.634 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 180], 60.00th=[ 186], 00:32:06.634 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 202], 
00:32:06.634 | 99.00th=[ 219], 99.50th=[ 223], 99.90th=[ 231], 99.95th=[ 375], 00:32:06.634 | 99.99th=[ 375] 00:32:06.634 bw ( KiB/s): min= 8192, max= 8192, per=34.63%, avg=8192.00, stdev= 0.00, samples=1 00:32:06.634 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:06.634 lat (usec) : 250=95.03%, 500=3.96%, 750=0.06% 00:32:06.634 lat (msec) : 50=0.95% 00:32:06.634 cpu : usr=0.80%, sys=1.50%, ctx=1693, majf=0, minf=1 00:32:06.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.634 issued rwts: total=666,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.634 job1: (groupid=0, jobs=1): err= 0: pid=2130238: Mon Dec 9 17:42:32 2024 00:32:06.634 read: IOPS=465, BW=1863KiB/s (1908kB/s)(1936KiB/1039msec) 00:32:06.634 slat (nsec): min=3904, max=25773, avg=7637.85, stdev=3387.36 00:32:06.634 clat (usec): min=198, max=42392, avg=1926.22, stdev=8158.56 00:32:06.634 lat (usec): min=203, max=42402, avg=1933.86, stdev=8161.56 00:32:06.634 clat percentiles (usec): 00:32:06.634 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:32:06.634 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 235], 60.00th=[ 237], 00:32:06.634 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 251], 95.00th=[ 285], 00:32:06.634 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:06.634 | 99.99th=[42206] 00:32:06.634 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:32:06.634 slat (nsec): min=10210, max=36634, avg=11795.57, stdev=1885.16 00:32:06.634 clat (usec): min=146, max=368, avg=182.39, stdev=20.07 00:32:06.634 lat (usec): min=158, max=405, avg=194.18, stdev=20.48 00:32:06.634 clat percentiles (usec): 00:32:06.634 | 1.00th=[ 151], 5.00th=[ 159], 
10.00th=[ 163], 20.00th=[ 167], 00:32:06.634 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 186], 00:32:06.634 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 221], 00:32:06.634 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 371], 99.95th=[ 371], 00:32:06.634 | 99.99th=[ 371] 00:32:06.634 bw ( KiB/s): min= 4096, max= 4096, per=17.32%, avg=4096.00, stdev= 0.00, samples=1 00:32:06.634 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:06.634 lat (usec) : 250=94.78%, 500=3.21% 00:32:06.634 lat (msec) : 50=2.01% 00:32:06.634 cpu : usr=0.39%, sys=1.54%, ctx=997, majf=0, minf=1 00:32:06.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.634 issued rwts: total=484,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.634 job2: (groupid=0, jobs=1): err= 0: pid=2130239: Mon Dec 9 17:42:32 2024 00:32:06.634 read: IOPS=2260, BW=9043KiB/s (9260kB/s)(9052KiB/1001msec) 00:32:06.634 slat (nsec): min=6784, max=28207, avg=7733.24, stdev=1251.59 00:32:06.634 clat (usec): min=181, max=527, avg=242.50, stdev=31.07 00:32:06.634 lat (usec): min=189, max=534, avg=250.24, stdev=31.11 00:32:06.634 clat percentiles (usec): 00:32:06.634 | 1.00th=[ 192], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 219], 00:32:06.634 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 247], 00:32:06.634 | 70.00th=[ 253], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 285], 00:32:06.634 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 445], 99.95th=[ 502], 00:32:06.634 | 99.99th=[ 529] 00:32:06.634 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:06.634 slat (nsec): min=9340, max=36489, avg=10374.25, stdev=1104.92 00:32:06.634 clat (usec): min=125, max=433, avg=154.85, 
stdev=24.07 00:32:06.634 lat (usec): min=135, max=444, avg=165.23, stdev=24.23 00:32:06.634 clat percentiles (usec): 00:32:06.634 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 133], 20.00th=[ 137], 00:32:06.634 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 153], 00:32:06.634 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 186], 95.00th=[ 200], 00:32:06.634 | 99.00th=[ 223], 99.50th=[ 245], 99.90th=[ 347], 99.95th=[ 388], 00:32:06.634 | 99.99th=[ 433] 00:32:06.634 bw ( KiB/s): min=10768, max=10768, per=45.52%, avg=10768.00, stdev= 0.00, samples=1 00:32:06.634 iops : min= 2692, max= 2692, avg=2692.00, stdev= 0.00, samples=1 00:32:06.634 lat (usec) : 250=83.50%, 500=16.46%, 750=0.04% 00:32:06.634 cpu : usr=2.00%, sys=4.90%, ctx=4824, majf=0, minf=1 00:32:06.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.634 issued rwts: total=2263,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.634 job3: (groupid=0, jobs=1): err= 0: pid=2130240: Mon Dec 9 17:42:32 2024 00:32:06.634 read: IOPS=1589, BW=6358KiB/s (6510kB/s)(6364KiB/1001msec) 00:32:06.634 slat (nsec): min=2987, max=28956, avg=7485.81, stdev=1443.77 00:32:06.634 clat (usec): min=174, max=42090, avg=399.65, stdev=2531.89 00:32:06.634 lat (usec): min=181, max=42101, avg=407.14, stdev=2532.58 00:32:06.634 clat percentiles (usec): 00:32:06.634 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:32:06.634 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 247], 00:32:06.634 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 306], 00:32:06.634 | 99.00th=[ 343], 99.50th=[ 429], 99.90th=[42206], 99.95th=[42206], 00:32:06.634 | 99.99th=[42206] 00:32:06.634 write: IOPS=2045, BW=8184KiB/s 
(8380kB/s)(8192KiB/1001msec); 0 zone resets 00:32:06.634 slat (nsec): min=8905, max=39908, avg=10043.35, stdev=1018.52 00:32:06.634 clat (usec): min=125, max=372, avg=158.13, stdev=34.51 00:32:06.634 lat (usec): min=134, max=412, avg=168.17, stdev=34.65 00:32:06.634 clat percentiles (usec): 00:32:06.634 | 1.00th=[ 128], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 133], 00:32:06.634 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:32:06.634 | 70.00th=[ 163], 80.00th=[ 186], 90.00th=[ 206], 95.00th=[ 219], 00:32:06.634 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 355], 99.95th=[ 359], 00:32:06.634 | 99.99th=[ 371] 00:32:06.634 bw ( KiB/s): min= 4304, max= 4304, per=18.20%, avg=4304.00, stdev= 0.00, samples=1 00:32:06.634 iops : min= 1076, max= 1076, avg=1076.00, stdev= 0.00, samples=1 00:32:06.634 lat (usec) : 250=82.28%, 500=17.53% 00:32:06.634 lat (msec) : 4=0.03%, 50=0.16% 00:32:06.634 cpu : usr=1.20%, sys=4.00%, ctx=3639, majf=0, minf=2 00:32:06.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.634 issued rwts: total=1591,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.634 00:32:06.634 Run status group 0 (all jobs): 00:32:06.634 READ: bw=18.8MiB/s (19.7MB/s), 1863KiB/s-9043KiB/s (1908kB/s-9260kB/s), io=19.5MiB (20.5MB), run=1001-1039msec 00:32:06.634 WRITE: bw=23.1MiB/s (24.2MB/s), 1971KiB/s-9.99MiB/s (2018kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1039msec 00:32:06.634 00:32:06.634 Disk stats (read/write): 00:32:06.634 nvme0n1: ios=708/1024, merge=0/0, ticks=1139/172, in_queue=1311, util=98.10% 00:32:06.634 nvme0n2: ios=516/512, merge=0/0, ticks=1236/89, in_queue=1325, util=96.54% 00:32:06.634 nvme0n3: ios=2058/2048, merge=0/0, ticks=985/316, in_queue=1301, util=97.50% 
00:32:06.635 nvme0n4: ios=1367/1536, merge=0/0, ticks=571/240, in_queue=811, util=89.62% 00:32:06.635 17:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:06.635 [global] 00:32:06.635 thread=1 00:32:06.635 invalidate=1 00:32:06.635 rw=write 00:32:06.635 time_based=1 00:32:06.635 runtime=1 00:32:06.635 ioengine=libaio 00:32:06.635 direct=1 00:32:06.635 bs=4096 00:32:06.635 iodepth=128 00:32:06.635 norandommap=0 00:32:06.635 numjobs=1 00:32:06.635 00:32:06.635 verify_dump=1 00:32:06.635 verify_backlog=512 00:32:06.635 verify_state_save=0 00:32:06.635 do_verify=1 00:32:06.635 verify=crc32c-intel 00:32:06.635 [job0] 00:32:06.635 filename=/dev/nvme0n1 00:32:06.635 [job1] 00:32:06.635 filename=/dev/nvme0n2 00:32:06.635 [job2] 00:32:06.635 filename=/dev/nvme0n3 00:32:06.635 [job3] 00:32:06.635 filename=/dev/nvme0n4 00:32:06.635 Could not set queue depth (nvme0n1) 00:32:06.635 Could not set queue depth (nvme0n2) 00:32:06.635 Could not set queue depth (nvme0n3) 00:32:06.635 Could not set queue depth (nvme0n4) 00:32:06.635 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:06.635 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:06.635 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:06.635 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:06.635 fio-3.35 00:32:06.635 Starting 4 threads 00:32:08.010 00:32:08.010 job0: (groupid=0, jobs=1): err= 0: pid=2130603: Mon Dec 9 17:42:34 2024 00:32:08.010 read: IOPS=2535, BW=9.91MiB/s (10.4MB/s)(10.1MiB/1015msec) 00:32:08.010 slat (nsec): min=1364, max=29525k, avg=144376.83, stdev=1222012.28 00:32:08.010 clat (usec): min=9164, max=58747, 
avg=17461.94, stdev=6824.03 00:32:08.010 lat (usec): min=9168, max=58773, avg=17606.32, stdev=6958.24 00:32:08.010 clat percentiles (usec): 00:32:08.010 | 1.00th=[ 9241], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896], 00:32:08.010 | 30.00th=[13173], 40.00th=[15139], 50.00th=[16450], 60.00th=[17433], 00:32:08.010 | 70.00th=[20579], 80.00th=[22938], 90.00th=[29230], 95.00th=[30802], 00:32:08.010 | 99.00th=[32637], 99.50th=[32900], 99.90th=[40633], 99.95th=[51643], 00:32:08.010 | 99.99th=[58983] 00:32:08.010 write: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec); 0 zone resets 00:32:08.010 slat (usec): min=2, max=21243, avg=199.49, stdev=1061.65 00:32:08.010 clat (usec): min=7793, max=64851, avg=26204.88, stdev=12642.86 00:32:08.010 lat (usec): min=7804, max=64861, avg=26404.37, stdev=12709.08 00:32:08.010 clat percentiles (usec): 00:32:08.010 | 1.00th=[ 8029], 5.00th=[10814], 10.00th=[12518], 20.00th=[15664], 00:32:08.010 | 30.00th=[16909], 40.00th=[19006], 50.00th=[24511], 60.00th=[25822], 00:32:08.010 | 70.00th=[33162], 80.00th=[36963], 90.00th=[44827], 95.00th=[49546], 00:32:08.010 | 99.00th=[60556], 99.50th=[63701], 99.90th=[64750], 99.95th=[64750], 00:32:08.010 | 99.99th=[64750] 00:32:08.010 bw ( KiB/s): min=11584, max=12080, per=18.88%, avg=11832.00, stdev=350.72, samples=2 00:32:08.010 iops : min= 2896, max= 3020, avg=2958.00, stdev=87.68, samples=2 00:32:08.010 lat (msec) : 10=12.15%, 20=40.61%, 50=44.63%, 100=2.60% 00:32:08.010 cpu : usr=2.47%, sys=3.45%, ctx=307, majf=0, minf=1 00:32:08.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:32:08.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:08.010 issued rwts: total=2574,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:08.010 job1: (groupid=0, jobs=1): err= 0: pid=2130604: Mon Dec 9 
17:42:34 2024 00:32:08.010 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:32:08.010 slat (nsec): min=1324, max=47578k, avg=143603.15, stdev=1287184.47 00:32:08.010 clat (usec): min=4813, max=74867, avg=16232.09, stdev=12219.92 00:32:08.010 lat (usec): min=4832, max=74871, avg=16375.69, stdev=12289.69 00:32:08.010 clat percentiles (usec): 00:32:08.010 | 1.00th=[ 6063], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9765], 00:32:08.010 | 30.00th=[11863], 40.00th=[13435], 50.00th=[13829], 60.00th=[13960], 00:32:08.010 | 70.00th=[14615], 80.00th=[16909], 90.00th=[19530], 95.00th=[33424], 00:32:08.010 | 99.00th=[73925], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:32:08.010 | 99.99th=[74974] 00:32:08.010 write: IOPS=2619, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1005msec); 0 zone resets 00:32:08.010 slat (usec): min=2, max=40853, avg=233.29, stdev=1598.82 00:32:08.010 clat (usec): min=1179, max=162263, avg=28903.25, stdev=33339.54 00:32:08.010 lat (usec): min=1224, max=162274, avg=29136.53, stdev=33568.63 00:32:08.010 clat percentiles (msec): 00:32:08.010 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:32:08.010 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 15], 60.00th=[ 24], 00:32:08.010 | 70.00th=[ 26], 80.00th=[ 43], 90.00th=[ 74], 95.00th=[ 113], 00:32:08.010 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 163], 99.95th=[ 163], 00:32:08.010 | 99.99th=[ 163] 00:32:08.010 bw ( KiB/s): min= 6768, max=13712, per=16.34%, avg=10240.00, stdev=4910.15, samples=2 00:32:08.010 iops : min= 1692, max= 3428, avg=2560.00, stdev=1227.54, samples=2 00:32:08.010 lat (msec) : 2=0.02%, 4=0.39%, 10=28.40%, 20=43.10%, 50=17.99% 00:32:08.010 lat (msec) : 100=7.20%, 250=2.91% 00:32:08.010 cpu : usr=1.20%, sys=3.49%, ctx=250, majf=0, minf=1 00:32:08.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:32:08.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:32:08.010 issued rwts: total=2560,2633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:08.010 job2: (groupid=0, jobs=1): err= 0: pid=2130605: Mon Dec 9 17:42:34 2024 00:32:08.010 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:32:08.010 slat (nsec): min=1618, max=21276k, avg=151727.95, stdev=1156713.95 00:32:08.010 clat (usec): min=4379, max=58451, avg=18091.59, stdev=10596.79 00:32:08.010 lat (usec): min=4389, max=58461, avg=18243.32, stdev=10682.86 00:32:08.010 clat percentiles (usec): 00:32:08.010 | 1.00th=[ 7570], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10814], 00:32:08.010 | 30.00th=[11338], 40.00th=[13042], 50.00th=[14615], 60.00th=[15795], 00:32:08.010 | 70.00th=[19268], 80.00th=[22938], 90.00th=[29492], 95.00th=[46400], 00:32:08.010 | 99.00th=[55313], 99.50th=[56361], 99.90th=[58459], 99.95th=[58459], 00:32:08.010 | 99.99th=[58459] 00:32:08.010 write: IOPS=3497, BW=13.7MiB/s (14.3MB/s)(13.9MiB/1015msec); 0 zone resets 00:32:08.010 slat (usec): min=3, max=16723, avg=144.69, stdev=771.38 00:32:08.010 clat (usec): min=3117, max=58413, avg=20599.63, stdev=10050.17 00:32:08.010 lat (usec): min=3128, max=58417, avg=20744.32, stdev=10114.36 00:32:08.010 clat percentiles (usec): 00:32:08.010 | 1.00th=[ 5735], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[11863], 00:32:08.010 | 30.00th=[13829], 40.00th=[16909], 50.00th=[18744], 60.00th=[20841], 00:32:08.010 | 70.00th=[24773], 80.00th=[29754], 90.00th=[35914], 95.00th=[39584], 00:32:08.010 | 99.00th=[46400], 99.50th=[47973], 99.90th=[56361], 99.95th=[58459], 00:32:08.010 | 99.99th=[58459] 00:32:08.010 bw ( KiB/s): min=11584, max=15792, per=21.84%, avg=13688.00, stdev=2975.51, samples=2 00:32:08.010 iops : min= 2896, max= 3948, avg=3422.00, stdev=743.88, samples=2 00:32:08.010 lat (msec) : 4=0.09%, 10=13.09%, 20=49.77%, 50=35.58%, 100=1.46% 00:32:08.010 cpu : usr=2.96%, sys=4.54%, ctx=313, majf=0, minf=1 00:32:08.010 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:32:08.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:08.010 issued rwts: total=3072,3550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:08.010 job3: (groupid=0, jobs=1): err= 0: pid=2130606: Mon Dec 9 17:42:34 2024 00:32:08.010 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:32:08.010 slat (nsec): min=1335, max=25598k, avg=80435.32, stdev=694015.09 00:32:08.010 clat (usec): min=1726, max=49020, avg=10709.47, stdev=4857.97 00:32:08.010 lat (usec): min=1737, max=49044, avg=10789.90, stdev=4904.72 00:32:08.010 clat percentiles (usec): 00:32:08.010 | 1.00th=[ 5211], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7767], 00:32:08.010 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10028], 00:32:08.010 | 70.00th=[11207], 80.00th=[12649], 90.00th=[15401], 95.00th=[19792], 00:32:08.010 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:32:08.010 | 99.99th=[49021] 00:32:08.010 write: IOPS=6604, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:32:08.010 slat (usec): min=2, max=8974, avg=71.29, stdev=488.30 00:32:08.010 clat (usec): min=1566, max=23986, avg=9317.13, stdev=2655.85 00:32:08.010 lat (usec): min=1583, max=23997, avg=9388.42, stdev=2686.25 00:32:08.010 clat percentiles (usec): 00:32:08.010 | 1.00th=[ 4359], 5.00th=[ 5604], 10.00th=[ 6456], 20.00th=[ 7635], 00:32:08.010 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:32:08.010 | 70.00th=[ 9372], 80.00th=[10552], 90.00th=[12518], 95.00th=[15008], 00:32:08.010 | 99.00th=[18744], 99.50th=[19792], 99.90th=[21627], 99.95th=[23987], 00:32:08.010 | 99.99th=[23987] 00:32:08.010 bw ( KiB/s): min=23864, max=28320, per=41.62%, avg=26092.00, stdev=3150.87, samples=2 00:32:08.010 iops : min= 
5966, max= 7080, avg=6523.00, stdev=787.72, samples=2 00:32:08.010 lat (msec) : 2=0.11%, 4=0.30%, 10=67.96%, 20=29.07%, 50=2.56% 00:32:08.010 cpu : usr=5.86%, sys=6.26%, ctx=514, majf=0, minf=2 00:32:08.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:32:08.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:08.010 issued rwts: total=6144,6651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:08.010 00:32:08.010 Run status group 0 (all jobs): 00:32:08.010 READ: bw=55.2MiB/s (57.9MB/s), 9.91MiB/s-23.8MiB/s (10.4MB/s-25.0MB/s), io=56.1MiB (58.8MB), run=1005-1015msec 00:32:08.010 WRITE: bw=61.2MiB/s (64.2MB/s), 10.2MiB/s-25.8MiB/s (10.7MB/s-27.1MB/s), io=62.1MiB (65.1MB), run=1005-1015msec 00:32:08.010 00:32:08.010 Disk stats (read/write): 00:32:08.011 nvme0n1: ios=2319/2560, merge=0/0, ticks=40415/61003, in_queue=101418, util=97.49% 00:32:08.011 nvme0n2: ios=2080/2343, merge=0/0, ticks=15779/24707, in_queue=40486, util=96.95% 00:32:08.011 nvme0n3: ios=2590/3055, merge=0/0, ticks=45586/58046, in_queue=103632, util=98.44% 00:32:08.011 nvme0n4: ios=5145/5503, merge=0/0, ticks=48881/45423, in_queue=94304, util=98.43% 00:32:08.011 17:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:08.011 [global] 00:32:08.011 thread=1 00:32:08.011 invalidate=1 00:32:08.011 rw=randwrite 00:32:08.011 time_based=1 00:32:08.011 runtime=1 00:32:08.011 ioengine=libaio 00:32:08.011 direct=1 00:32:08.011 bs=4096 00:32:08.011 iodepth=128 00:32:08.011 norandommap=0 00:32:08.011 numjobs=1 00:32:08.011 00:32:08.011 verify_dump=1 00:32:08.011 verify_backlog=512 00:32:08.011 verify_state_save=0 00:32:08.011 do_verify=1 00:32:08.011 
verify=crc32c-intel 00:32:08.011 [job0] 00:32:08.011 filename=/dev/nvme0n1 00:32:08.011 [job1] 00:32:08.011 filename=/dev/nvme0n2 00:32:08.011 [job2] 00:32:08.011 filename=/dev/nvme0n3 00:32:08.011 [job3] 00:32:08.011 filename=/dev/nvme0n4 00:32:08.011 Could not set queue depth (nvme0n1) 00:32:08.011 Could not set queue depth (nvme0n2) 00:32:08.011 Could not set queue depth (nvme0n3) 00:32:08.011 Could not set queue depth (nvme0n4) 00:32:08.269 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.269 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.269 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.269 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.269 fio-3.35 00:32:08.269 Starting 4 threads 00:32:09.643 00:32:09.643 job0: (groupid=0, jobs=1): err= 0: pid=2130970: Mon Dec 9 17:42:35 2024 00:32:09.643 read: IOPS=2315, BW=9264KiB/s (9486kB/s)(9384KiB/1013msec) 00:32:09.643 slat (nsec): min=1896, max=19430k, avg=147892.87, stdev=1034820.46 00:32:09.643 clat (usec): min=7150, max=52364, avg=18250.61, stdev=8357.17 00:32:09.643 lat (usec): min=8478, max=52388, avg=18398.50, stdev=8446.00 00:32:09.643 clat percentiles (usec): 00:32:09.643 | 1.00th=[ 9634], 5.00th=[10683], 10.00th=[11863], 20.00th=[11994], 00:32:09.643 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13435], 60.00th=[17171], 00:32:09.643 | 70.00th=[20579], 80.00th=[25560], 90.00th=[32637], 95.00th=[38011], 00:32:09.643 | 99.00th=[40109], 99.50th=[44303], 99.90th=[45351], 99.95th=[51643], 00:32:09.643 | 99.99th=[52167] 00:32:09.643 write: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec); 0 zone resets 00:32:09.643 slat (usec): min=2, max=34222, avg=248.95, stdev=1599.00 00:32:09.643 clat (usec): min=10008, max=84546, avg=33025.24, 
stdev=13417.58 00:32:09.643 lat (usec): min=10019, max=84581, avg=33274.19, stdev=13530.74 00:32:09.643 clat percentiles (usec): 00:32:09.643 | 1.00th=[12518], 5.00th=[16057], 10.00th=[18220], 20.00th=[18744], 00:32:09.643 | 30.00th=[24511], 40.00th=[25560], 50.00th=[30540], 60.00th=[36439], 00:32:09.643 | 70.00th=[39584], 80.00th=[45351], 90.00th=[53216], 95.00th=[56361], 00:32:09.643 | 99.00th=[64226], 99.50th=[64226], 99.90th=[70779], 99.95th=[77071], 00:32:09.643 | 99.99th=[84411] 00:32:09.643 bw ( KiB/s): min= 8440, max=12040, per=16.36%, avg=10240.00, stdev=2545.58, samples=2 00:32:09.643 iops : min= 2110, max= 3010, avg=2560.00, stdev=636.40, samples=2 00:32:09.643 lat (msec) : 10=0.88%, 20=44.05%, 50=46.33%, 100=8.74% 00:32:09.643 cpu : usr=2.67%, sys=3.75%, ctx=220, majf=0, minf=1 00:32:09.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:32:09.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.643 issued rwts: total=2346,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.643 job1: (groupid=0, jobs=1): err= 0: pid=2130971: Mon Dec 9 17:42:35 2024 00:32:09.643 read: IOPS=3148, BW=12.3MiB/s (12.9MB/s)(12.5MiB/1013msec) 00:32:09.643 slat (nsec): min=1414, max=16437k, avg=131762.06, stdev=890799.89 00:32:09.643 clat (usec): min=5606, max=76717, avg=15282.78, stdev=8095.18 00:32:09.643 lat (usec): min=5617, max=76727, avg=15414.54, stdev=8195.12 00:32:09.643 clat percentiles (usec): 00:32:09.643 | 1.00th=[ 7308], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10290], 00:32:09.643 | 30.00th=[11731], 40.00th=[12649], 50.00th=[12911], 60.00th=[13304], 00:32:09.643 | 70.00th=[15008], 80.00th=[19268], 90.00th=[21627], 95.00th=[28967], 00:32:09.643 | 99.00th=[60031], 99.50th=[68682], 99.90th=[77071], 99.95th=[77071], 00:32:09.643 | 99.99th=[77071] 
00:32:09.643 write: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec); 0 zone resets 00:32:09.643 slat (usec): min=2, max=17268, avg=144.47, stdev=854.28 00:32:09.643 clat (usec): min=1488, max=79446, avg=22197.24, stdev=16852.08 00:32:09.643 lat (usec): min=1500, max=79458, avg=22341.72, stdev=16948.70 00:32:09.643 clat percentiles (usec): 00:32:09.643 | 1.00th=[ 5669], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 9503], 00:32:09.643 | 30.00th=[10159], 40.00th=[11469], 50.00th=[14615], 60.00th=[19006], 00:32:09.643 | 70.00th=[24773], 80.00th=[37487], 90.00th=[49546], 95.00th=[57410], 00:32:09.643 | 99.00th=[71828], 99.50th=[73925], 99.90th=[79168], 99.95th=[79168], 00:32:09.643 | 99.99th=[79168] 00:32:09.643 bw ( KiB/s): min=10488, max=18096, per=22.83%, avg=14292.00, stdev=5379.67, samples=2 00:32:09.643 iops : min= 2622, max= 4524, avg=3573.00, stdev=1344.92, samples=2 00:32:09.643 lat (msec) : 2=0.03%, 4=0.09%, 10=20.32%, 20=52.96%, 50=20.98% 00:32:09.643 lat (msec) : 100=5.63% 00:32:09.643 cpu : usr=3.75%, sys=4.74%, ctx=280, majf=0, minf=1 00:32:09.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:32:09.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.643 issued rwts: total=3189,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.643 job2: (groupid=0, jobs=1): err= 0: pid=2130972: Mon Dec 9 17:42:35 2024 00:32:09.643 read: IOPS=6677, BW=26.1MiB/s (27.3MB/s)(26.3MiB/1007msec) 00:32:09.643 slat (nsec): min=1358, max=9268.2k, avg=76258.41, stdev=607617.52 00:32:09.643 clat (usec): min=3380, max=17861, avg=9617.42, stdev=2493.48 00:32:09.643 lat (usec): min=3387, max=21080, avg=9693.68, stdev=2542.45 00:32:09.643 clat percentiles (usec): 00:32:09.643 | 1.00th=[ 5604], 5.00th=[ 6915], 10.00th=[ 7373], 20.00th=[ 8160], 00:32:09.643 | 30.00th=[ 
8291], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8979], 00:32:09.643 | 70.00th=[ 9503], 80.00th=[11076], 90.00th=[14222], 95.00th=[15139], 00:32:09.643 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17433], 99.95th=[17957], 00:32:09.643 | 99.99th=[17957] 00:32:09.643 write: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec); 0 zone resets 00:32:09.643 slat (usec): min=2, max=8983, avg=61.97, stdev=408.31 00:32:09.643 clat (usec): min=1820, max=19369, avg=8768.87, stdev=2017.48 00:32:09.643 lat (usec): min=1831, max=19393, avg=8830.84, stdev=2037.43 00:32:09.643 clat percentiles (usec): 00:32:09.643 | 1.00th=[ 3621], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 7177], 00:32:09.643 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:32:09.643 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[11207], 95.00th=[12125], 00:32:09.643 | 99.00th=[14484], 99.50th=[14877], 99.90th=[16909], 99.95th=[17957], 00:32:09.643 | 99.99th=[19268] 00:32:09.643 bw ( KiB/s): min=28200, max=28672, per=45.42%, avg=28436.00, stdev=333.75, samples=2 00:32:09.643 iops : min= 7050, max= 7168, avg=7109.00, stdev=83.44, samples=2 00:32:09.643 lat (msec) : 2=0.09%, 4=0.63%, 10=78.49%, 20=20.79% 00:32:09.643 cpu : usr=5.86%, sys=8.15%, ctx=618, majf=0, minf=1 00:32:09.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:32:09.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.643 issued rwts: total=6724,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.643 job3: (groupid=0, jobs=1): err= 0: pid=2130973: Mon Dec 9 17:42:35 2024 00:32:09.643 read: IOPS=2023, BW=8095KiB/s (8289kB/s)(8192KiB/1012msec) 00:32:09.643 slat (nsec): min=1970, max=24693k, avg=150361.24, stdev=1093980.37 00:32:09.643 clat (usec): min=1304, max=82215, avg=17216.78, stdev=10689.54 00:32:09.643 lat 
(usec): min=1310, max=82218, avg=17367.14, stdev=10789.49 00:32:09.643 clat percentiles (usec): 00:32:09.643 | 1.00th=[ 1631], 5.00th=[ 4359], 10.00th=[ 6390], 20.00th=[10945], 00:32:09.643 | 30.00th=[11076], 40.00th=[11207], 50.00th=[16188], 60.00th=[18220], 00:32:09.643 | 70.00th=[19792], 80.00th=[21365], 90.00th=[29492], 95.00th=[42730], 00:32:09.643 | 99.00th=[53216], 99.50th=[58459], 99.90th=[82314], 99.95th=[82314], 00:32:09.643 | 99.99th=[82314] 00:32:09.643 write: IOPS=2511, BW=9.81MiB/s (10.3MB/s)(9.93MiB/1012msec); 0 zone resets 00:32:09.643 slat (usec): min=2, max=19202, avg=259.42, stdev=1257.45 00:32:09.643 clat (usec): min=1047, max=127547, avg=36635.91, stdev=30594.84 00:32:09.643 lat (usec): min=1058, max=127558, avg=36895.33, stdev=30806.36 00:32:09.643 clat percentiles (msec): 00:32:09.643 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 17], 00:32:09.643 | 30.00th=[ 19], 40.00th=[ 21], 50.00th=[ 24], 60.00th=[ 29], 00:32:09.643 | 70.00th=[ 43], 80.00th=[ 52], 90.00th=[ 89], 95.00th=[ 115], 00:32:09.643 | 99.00th=[ 123], 99.50th=[ 128], 99.90th=[ 128], 99.95th=[ 128], 00:32:09.643 | 99.99th=[ 128] 00:32:09.643 bw ( KiB/s): min= 8192, max=11120, per=15.42%, avg=9656.00, stdev=2070.41, samples=2 00:32:09.643 iops : min= 2048, max= 2780, avg=2414.00, stdev=517.60, samples=2 00:32:09.643 lat (msec) : 2=0.83%, 4=1.68%, 10=6.75%, 20=45.23%, 50=31.55% 00:32:09.643 lat (msec) : 100=9.11%, 250=4.86% 00:32:09.643 cpu : usr=2.37%, sys=3.66%, ctx=261, majf=0, minf=2 00:32:09.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:32:09.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.643 issued rwts: total=2048,2542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.643 00:32:09.643 Run status group 0 (all jobs): 00:32:09.643 READ: bw=55.2MiB/s 
(57.8MB/s), 8095KiB/s-26.1MiB/s (8289kB/s-27.3MB/s), io=55.9MiB (58.6MB), run=1007-1013msec 00:32:09.643 WRITE: bw=61.1MiB/s (64.1MB/s), 9.81MiB/s-27.8MiB/s (10.3MB/s-29.2MB/s), io=61.9MiB (64.9MB), run=1007-1013msec 00:32:09.643 00:32:09.643 Disk stats (read/write): 00:32:09.643 nvme0n1: ios=1642/2048, merge=0/0, ticks=16225/36673, in_queue=52898, util=98.10% 00:32:09.643 nvme0n2: ios=3096/3215, merge=0/0, ticks=43092/60552, in_queue=103644, util=97.36% 00:32:09.643 nvme0n3: ios=5674/6088, merge=0/0, ticks=53105/51601, in_queue=104706, util=96.25% 00:32:09.643 nvme0n4: ios=1717/1991, merge=0/0, ticks=25873/78824, in_queue=104697, util=89.72% 00:32:09.643 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:09.643 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2131200 00:32:09.643 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:09.644 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:09.644 [global] 00:32:09.644 thread=1 00:32:09.644 invalidate=1 00:32:09.644 rw=read 00:32:09.644 time_based=1 00:32:09.644 runtime=10 00:32:09.644 ioengine=libaio 00:32:09.644 direct=1 00:32:09.644 bs=4096 00:32:09.644 iodepth=1 00:32:09.644 norandommap=1 00:32:09.644 numjobs=1 00:32:09.644 00:32:09.644 [job0] 00:32:09.644 filename=/dev/nvme0n1 00:32:09.644 [job1] 00:32:09.644 filename=/dev/nvme0n2 00:32:09.644 [job2] 00:32:09.644 filename=/dev/nvme0n3 00:32:09.644 [job3] 00:32:09.644 filename=/dev/nvme0n4 00:32:09.644 Could not set queue depth (nvme0n1) 00:32:09.644 Could not set queue depth (nvme0n2) 00:32:09.644 Could not set queue depth (nvme0n3) 00:32:09.644 Could not set queue depth (nvme0n4) 00:32:09.901 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:32:09.901 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:09.901 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:09.901 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:09.901 fio-3.35 00:32:09.901 Starting 4 threads 00:32:12.426 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:12.684 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2285568, buflen=4096 00:32:12.684 fio: pid=2131344, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:12.684 17:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:12.942 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=15806464, buflen=4096 00:32:12.942 fio: pid=2131341, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:12.942 17:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:12.942 17:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:13.201 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6320128, buflen=4096 00:32:13.201 fio: pid=2131339, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:13.201 17:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:32:13.201 17:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:13.459 17:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:13.459 17:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:13.459 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=335872, buflen=4096 00:32:13.459 fio: pid=2131340, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:13.459 00:32:13.459 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2131339: Mon Dec 9 17:42:39 2024 00:32:13.459 read: IOPS=492, BW=1968KiB/s (2015kB/s)(6172KiB/3136msec) 00:32:13.459 slat (usec): min=2, max=11387, avg=22.33, stdev=397.94 00:32:13.459 clat (usec): min=183, max=42015, avg=1994.42, stdev=8325.85 00:32:13.459 lat (usec): min=190, max=52640, avg=2016.75, stdev=8405.71 00:32:13.459 clat percentiles (usec): 00:32:13.459 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 210], 00:32:13.459 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 223], 00:32:13.459 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 273], 00:32:13.459 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:32:13.459 | 99.99th=[42206] 00:32:13.459 bw ( KiB/s): min= 96, max=10232, per=27.31%, avg=1954.83, stdev=4075.32, samples=6 00:32:13.459 iops : min= 24, max= 2558, avg=488.67, stdev=1018.84, samples=6 00:32:13.459 lat (usec) : 250=93.01%, 500=2.59% 00:32:13.459 lat (msec) : 50=4.34% 00:32:13.459 cpu : usr=0.00%, sys=0.64%, ctx=1549, majf=0, minf=2 00:32:13.459 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.459 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.459 issued rwts: total=1544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.459 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:13.459 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2131340: Mon Dec 9 17:42:39 2024 00:32:13.459 read: IOPS=24, BW=97.1KiB/s (99.4kB/s)(328KiB/3378msec) 00:32:13.459 slat (usec): min=9, max=10810, avg=234.62, stdev=1388.00 00:32:13.459 clat (usec): min=349, max=53221, avg=40683.65, stdev=4712.18 00:32:13.459 lat (usec): min=385, max=64031, avg=40838.75, stdev=5191.36 00:32:13.459 clat percentiles (usec): 00:32:13.459 | 1.00th=[ 351], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:13.459 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:13.459 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:13.459 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:32:13.459 | 99.99th=[53216] 00:32:13.459 bw ( KiB/s): min= 93, max= 104, per=1.37%, avg=98.17, stdev= 4.67, samples=6 00:32:13.459 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:32:13.459 lat (usec) : 500=1.20% 00:32:13.459 lat (msec) : 50=96.39%, 100=1.20% 00:32:13.459 cpu : usr=0.00%, sys=0.12%, ctx=87, majf=0, minf=2 00:32:13.459 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.459 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.459 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.459 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:13.459 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=2131341: Mon Dec 9 17:42:39 2024 00:32:13.459 read: IOPS=1321, BW=5286KiB/s (5413kB/s)(15.1MiB/2920msec) 00:32:13.459 slat (nsec): min=6490, max=62825, avg=7584.69, stdev=2238.58 00:32:13.459 clat (usec): min=178, max=42887, avg=742.25, stdev=4615.24 00:32:13.459 lat (usec): min=186, max=42909, avg=749.83, stdev=4617.01 00:32:13.459 clat percentiles (usec): 00:32:13.459 | 1.00th=[ 198], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 204], 00:32:13.459 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 212], 00:32:13.459 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 243], 00:32:13.459 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:32:13.459 | 99.99th=[42730] 00:32:13.459 bw ( KiB/s): min= 96, max=16440, per=47.08%, avg=3368.00, stdev=7307.47, samples=5 00:32:13.459 iops : min= 24, max= 4110, avg=842.00, stdev=1826.87, samples=5 00:32:13.459 lat (usec) : 250=96.94%, 500=1.71%, 750=0.03% 00:32:13.459 lat (msec) : 50=1.30% 00:32:13.459 cpu : usr=0.38%, sys=1.23%, ctx=3861, majf=0, minf=2 00:32:13.459 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.459 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.459 issued rwts: total=3860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.459 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:13.459 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2131344: Mon Dec 9 17:42:39 2024 00:32:13.459 read: IOPS=206, BW=824KiB/s (844kB/s)(2232KiB/2709msec) 00:32:13.459 slat (nsec): min=6735, max=46249, avg=9302.93, stdev=5302.64 00:32:13.459 clat (usec): min=208, max=41920, avg=4804.68, stdev=12801.84 00:32:13.459 lat (usec): min=215, max=41943, avg=4813.96, stdev=12806.42 00:32:13.459 clat percentiles (usec): 00:32:13.459 | 1.00th=[ 215], 5.00th=[ 
249], 10.00th=[ 269], 20.00th=[ 277], 00:32:13.459 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 281], 60.00th=[ 285], 00:32:13.459 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[41157], 95.00th=[41157], 00:32:13.459 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:32:13.459 | 99.99th=[41681] 00:32:13.459 bw ( KiB/s): min= 96, max= 1248, per=4.58%, avg=328.00, stdev=514.31, samples=5 00:32:13.459 iops : min= 24, max= 312, avg=82.00, stdev=128.58, samples=5 00:32:13.459 lat (usec) : 250=5.72%, 500=83.01% 00:32:13.459 lat (msec) : 50=11.09% 00:32:13.459 cpu : usr=0.07%, sys=0.22%, ctx=562, majf=0, minf=1 00:32:13.459 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.459 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.459 issued rwts: total=559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.459 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:13.459 00:32:13.459 Run status group 0 (all jobs): 00:32:13.459 READ: bw=7155KiB/s (7326kB/s), 97.1KiB/s-5286KiB/s (99.4kB/s-5413kB/s), io=23.6MiB (24.7MB), run=2709-3378msec 00:32:13.459 00:32:13.459 Disk stats (read/write): 00:32:13.459 nvme0n1: ios=1533/0, merge=0/0, ticks=3538/0, in_queue=3538, util=98.34% 00:32:13.459 nvme0n2: ios=82/0, merge=0/0, ticks=3338/0, in_queue=3338, util=95.98% 00:32:13.459 nvme0n3: ios=3653/0, merge=0/0, ticks=2803/0, in_queue=2803, util=96.48% 00:32:13.459 nvme0n4: ios=596/0, merge=0/0, ticks=3398/0, in_queue=3398, util=98.85% 00:32:13.717 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:13.717 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:13.717 17:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:13.717 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:13.975 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:13.975 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:14.232 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:14.233 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:14.491 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:14.491 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2131200 00:32:14.491 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:14.491 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:14.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:14.491 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:14.491 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:14.491 17:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:14.491 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:14.491 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:14.491 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:14.491 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:14.491 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:14.491 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:14.491 nvmf hotplug test: fio failed as expected 00:32:14.491 17:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:14.749 rmmod nvme_tcp 00:32:14.749 rmmod nvme_fabrics 00:32:14.749 rmmod nvme_keyring 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2128734 ']' 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2128734 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2128734 ']' 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2128734 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:14.749 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2128734 
00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2128734' 00:32:15.009 killing process with pid 2128734 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2128734 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2128734 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:15.009 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:17.546 00:32:17.546 real 0m25.830s 00:32:17.546 user 1m31.598s 00:32:17.546 sys 0m10.920s 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.546 ************************************ 00:32:17.546 END TEST nvmf_fio_target 00:32:17.546 ************************************ 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:17.546 ************************************ 00:32:17.546 START TEST nvmf_bdevio 00:32:17.546 ************************************ 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:17.546 * Looking for test storage... 
00:32:17.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:17.546 17:42:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- 
# return 0 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:17.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.546 --rc genhtml_branch_coverage=1 00:32:17.546 --rc genhtml_function_coverage=1 00:32:17.546 --rc genhtml_legend=1 00:32:17.546 --rc geninfo_all_blocks=1 00:32:17.546 --rc geninfo_unexecuted_blocks=1 00:32:17.546 00:32:17.546 ' 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:17.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.546 --rc genhtml_branch_coverage=1 00:32:17.546 --rc genhtml_function_coverage=1 00:32:17.546 --rc genhtml_legend=1 00:32:17.546 --rc geninfo_all_blocks=1 00:32:17.546 --rc geninfo_unexecuted_blocks=1 00:32:17.546 00:32:17.546 ' 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:17.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.546 --rc genhtml_branch_coverage=1 00:32:17.546 --rc genhtml_function_coverage=1 00:32:17.546 --rc genhtml_legend=1 00:32:17.546 --rc geninfo_all_blocks=1 00:32:17.546 --rc geninfo_unexecuted_blocks=1 00:32:17.546 00:32:17.546 ' 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:17.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.546 --rc genhtml_branch_coverage=1 00:32:17.546 --rc genhtml_function_coverage=1 00:32:17.546 --rc genhtml_legend=1 00:32:17.546 --rc geninfo_all_blocks=1 00:32:17.546 --rc geninfo_unexecuted_blocks=1 00:32:17.546 00:32:17.546 ' 00:32:17.546 17:42:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.546 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:17.547 17:42:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:17.547 17:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:24.115 17:42:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:24.115 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:24.116 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:24.116 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.116 17:42:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:24.116 Found net devices under 0000:af:00.0: cvl_0_0 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:24.116 Found net devices under 0000:af:00.1: cvl_0_1 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:24.116 17:42:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:24.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:24.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:32:24.116 00:32:24.116 --- 10.0.0.2 ping statistics --- 00:32:24.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.116 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:24.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:24.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:32:24.116 00:32:24.116 --- 10.0.0.1 ping statistics --- 00:32:24.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.116 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2135715 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2135715 00:32:24.116 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:24.117 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2135715 ']' 00:32:24.117 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.117 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:24.117 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.117 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:24.117 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.117 [2024-12-09 17:42:49.751802] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:24.117 [2024-12-09 17:42:49.752697] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:32:24.117 [2024-12-09 17:42:49.752729] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:24.117 [2024-12-09 17:42:49.830717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:24.117 [2024-12-09 17:42:49.871150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:24.117 [2024-12-09 17:42:49.871190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:24.117 [2024-12-09 17:42:49.871197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:24.117 [2024-12-09 17:42:49.871204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:24.117 [2024-12-09 17:42:49.871209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:24.117 [2024-12-09 17:42:49.872597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:24.117 [2024-12-09 17:42:49.872706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:24.117 [2024-12-09 17:42:49.872814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:24.117 [2024-12-09 17:42:49.872815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:24.117 [2024-12-09 17:42:49.939657] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:24.117 [2024-12-09 17:42:49.940535] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:24.117 [2024-12-09 17:42:49.940622] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:24.117 [2024-12-09 17:42:49.940821] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:24.117 [2024-12-09 17:42:49.940869] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:24.117 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:24.117 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:24.117 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:24.117 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:24.117 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.117 17:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.117 [2024-12-09 17:42:50.009632] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.117 Malloc0 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.117 [2024-12-09 17:42:50.089774] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:24.117 { 00:32:24.117 "params": { 00:32:24.117 "name": "Nvme$subsystem", 00:32:24.117 "trtype": "$TEST_TRANSPORT", 00:32:24.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:24.117 "adrfam": "ipv4", 00:32:24.117 "trsvcid": "$NVMF_PORT", 00:32:24.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:24.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:24.117 "hdgst": ${hdgst:-false}, 00:32:24.117 "ddgst": ${ddgst:-false} 00:32:24.117 }, 00:32:24.117 "method": "bdev_nvme_attach_controller" 00:32:24.117 } 00:32:24.117 EOF 00:32:24.117 )") 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:24.117 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:24.117 "params": { 00:32:24.117 "name": "Nvme1", 00:32:24.117 "trtype": "tcp", 00:32:24.117 "traddr": "10.0.0.2", 00:32:24.117 "adrfam": "ipv4", 00:32:24.117 "trsvcid": "4420", 00:32:24.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:24.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:24.117 "hdgst": false, 00:32:24.117 "ddgst": false 00:32:24.117 }, 00:32:24.117 "method": "bdev_nvme_attach_controller" 00:32:24.117 }' 00:32:24.117 [2024-12-09 17:42:50.140797] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:32:24.117 [2024-12-09 17:42:50.140850] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135740 ] 00:32:24.117 [2024-12-09 17:42:50.216971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:24.117 [2024-12-09 17:42:50.259236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.117 [2024-12-09 17:42:50.259341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.117 [2024-12-09 17:42:50.259342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:24.117 I/O targets: 00:32:24.117 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:24.117 00:32:24.117 00:32:24.117 CUnit - A unit testing framework for C - Version 2.1-3 00:32:24.117 http://cunit.sourceforge.net/ 00:32:24.117 00:32:24.117 00:32:24.117 Suite: bdevio tests on: Nvme1n1 00:32:24.117 Test: blockdev write read block ...passed 00:32:24.117 Test: blockdev write zeroes read block ...passed 00:32:24.117 Test: blockdev write zeroes read no split ...passed 00:32:24.117 Test: blockdev 
write zeroes read split ...passed 00:32:24.117 Test: blockdev write zeroes read split partial ...passed 00:32:24.117 Test: blockdev reset ...[2024-12-09 17:42:50.517035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:24.117 [2024-12-09 17:42:50.517094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aea4f0 (9): Bad file descriptor 00:32:24.117 [2024-12-09 17:42:50.561070] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:24.117 passed 00:32:24.117 Test: blockdev write read 8 blocks ...passed 00:32:24.117 Test: blockdev write read size > 128k ...passed 00:32:24.117 Test: blockdev write read invalid size ...passed 00:32:24.117 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:24.117 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:24.117 Test: blockdev write read max offset ...passed 00:32:24.376 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:24.376 Test: blockdev writev readv 8 blocks ...passed 00:32:24.376 Test: blockdev writev readv 30 x 1block ...passed 00:32:24.376 Test: blockdev writev readv block ...passed 00:32:24.376 Test: blockdev writev readv size > 128k ...passed 00:32:24.376 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:24.376 Test: blockdev comparev and writev ...[2024-12-09 17:42:50.771047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:24.376 [2024-12-09 17:42:50.771076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:24.376 [2024-12-09 17:42:50.771090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:24.376 
[2024-12-09 17:42:50.771099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:24.376 [2024-12-09 17:42:50.771391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:24.376 [2024-12-09 17:42:50.771402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:24.376 [2024-12-09 17:42:50.771413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:24.376 [2024-12-09 17:42:50.771421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:24.376 [2024-12-09 17:42:50.771711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:24.376 [2024-12-09 17:42:50.771726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:24.376 [2024-12-09 17:42:50.771738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:24.376 [2024-12-09 17:42:50.771745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:24.376 [2024-12-09 17:42:50.772031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:24.376 [2024-12-09 17:42:50.772043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:24.376 [2024-12-09 17:42:50.772054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:24.376 [2024-12-09 17:42:50.772062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:24.376 passed 00:32:24.376 Test: blockdev nvme passthru rw ...passed 00:32:24.376 Test: blockdev nvme passthru vendor specific ...[2024-12-09 17:42:50.854481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:24.376 [2024-12-09 17:42:50.854499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:24.376 [2024-12-09 17:42:50.854611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:24.376 [2024-12-09 17:42:50.854620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:24.376 [2024-12-09 17:42:50.854724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:24.376 [2024-12-09 17:42:50.854733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:24.376 [2024-12-09 17:42:50.854832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:24.376 [2024-12-09 17:42:50.854841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:24.376 passed 00:32:24.376 Test: blockdev nvme admin passthru ...passed 00:32:24.376 Test: blockdev copy ...passed 00:32:24.376 00:32:24.376 Run Summary: Type Total Ran Passed Failed Inactive 00:32:24.376 suites 1 1 n/a 0 0 00:32:24.376 tests 23 23 23 0 0 00:32:24.376 asserts 152 152 152 0 n/a 00:32:24.376 00:32:24.376 Elapsed time = 1.004 
seconds 00:32:24.634 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:24.634 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.634 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:24.635 rmmod nvme_tcp 00:32:24.635 rmmod nvme_fabrics 00:32:24.635 rmmod nvme_keyring 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2135715 ']' 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2135715 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2135715 ']' 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2135715 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:24.635 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2135715 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2135715' 00:32:24.894 killing process with pid 2135715 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2135715 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2135715 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.894 17:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.430 17:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:27.430 00:32:27.430 real 0m9.822s 00:32:27.430 user 0m8.059s 00:32:27.430 sys 0m5.187s 00:32:27.430 17:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.430 17:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:27.430 ************************************ 00:32:27.430 END TEST nvmf_bdevio 00:32:27.430 ************************************ 00:32:27.430 17:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:27.430 00:32:27.430 real 4m31.104s 00:32:27.430 user 9m10.385s 00:32:27.430 sys 1m51.237s 00:32:27.430 17:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:32:27.430 17:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:27.430 ************************************ 00:32:27.430 END TEST nvmf_target_core_interrupt_mode 00:32:27.430 ************************************ 00:32:27.430 17:42:53 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:27.430 17:42:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:27.430 17:42:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.430 17:42:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:27.430 ************************************ 00:32:27.430 START TEST nvmf_interrupt 00:32:27.430 ************************************ 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:27.430 * Looking for test storage... 
00:32:27.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:27.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.430 --rc genhtml_branch_coverage=1 00:32:27.430 --rc genhtml_function_coverage=1 00:32:27.430 --rc genhtml_legend=1 00:32:27.430 --rc geninfo_all_blocks=1 00:32:27.430 --rc geninfo_unexecuted_blocks=1 00:32:27.430 00:32:27.430 ' 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:27.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.430 --rc genhtml_branch_coverage=1 00:32:27.430 --rc 
genhtml_function_coverage=1 00:32:27.430 --rc genhtml_legend=1 00:32:27.430 --rc geninfo_all_blocks=1 00:32:27.430 --rc geninfo_unexecuted_blocks=1 00:32:27.430 00:32:27.430 ' 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:27.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.430 --rc genhtml_branch_coverage=1 00:32:27.430 --rc genhtml_function_coverage=1 00:32:27.430 --rc genhtml_legend=1 00:32:27.430 --rc geninfo_all_blocks=1 00:32:27.430 --rc geninfo_unexecuted_blocks=1 00:32:27.430 00:32:27.430 ' 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:27.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.430 --rc genhtml_branch_coverage=1 00:32:27.430 --rc genhtml_function_coverage=1 00:32:27.430 --rc genhtml_legend=1 00:32:27.430 --rc geninfo_all_blocks=1 00:32:27.430 --rc geninfo_unexecuted_blocks=1 00:32:27.430 00:32:27.430 ' 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:27.430 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.431 
17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.431 
17:42:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:27.431 17:42:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:27.431 
17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:27.431 17:42:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.000 17:42:59 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:34.000 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.000 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:34.001 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.001 17:42:59 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:34.001 Found net devices under 0000:af:00.0: cvl_0_0 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:34.001 Found net devices under 0000:af:00.1: cvl_0_1 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.001 17:42:59 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:34.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:32:34.001 00:32:34.001 --- 10.0.0.2 ping statistics --- 00:32:34.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.001 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:34.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:32:34.001 00:32:34.001 --- 10.0.0.1 ping statistics --- 00:32:34.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.001 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:34.001 17:42:59 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2139394 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2139394 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2139394 ']' 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:34.001 [2024-12-09 17:42:59.659676] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:34.001 [2024-12-09 17:42:59.660630] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:32:34.001 [2024-12-09 17:42:59.660664] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.001 [2024-12-09 17:42:59.738980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:34.001 [2024-12-09 17:42:59.778371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:34.001 [2024-12-09 17:42:59.778406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:34.001 [2024-12-09 17:42:59.778412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:34.001 [2024-12-09 17:42:59.778418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:34.001 [2024-12-09 17:42:59.778423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:34.001 [2024-12-09 17:42:59.779564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.001 [2024-12-09 17:42:59.779567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.001 [2024-12-09 17:42:59.847481] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:34.001 [2024-12-09 17:42:59.848024] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:34.001 [2024-12-09 17:42:59.848209] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:34.001 5000+0 records in 00:32:34.001 5000+0 records out 00:32:34.001 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0176783 s, 579 MB/s 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:34.001 AIO0 00:32:34.001 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.002 17:42:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:34.002 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.002 17:42:59 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:34.002 [2024-12-09 17:42:59.972374] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.002 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.002 17:42:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:34.002 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.002 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:34.002 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.002 17:42:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:34.002 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.002 17:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:34.002 [2024-12-09 17:43:00.012746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2139394 0 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2139394 0 idle 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2139394 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2139394 -w 256 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2139394 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.24 reactor_0' 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2139394 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.24 reactor_0 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:34.002 
17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2139394 1 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2139394 1 idle 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2139394 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2139394 -w 256 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2139440 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2139440 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2139480 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2139394 0 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2139394 0 busy 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2139394 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2139394 -w 256 00:32:34.002 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:34.261 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2139394 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:00.26 reactor_0' 00:32:34.261 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2139394 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:00.26 reactor_0 00:32:34.261 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:34.261 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:34.261 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:34.261 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:34.261 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:34.261 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:34.261 17:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:32:35.195 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:32:35.195 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:35.195 17:43:01 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 2139394 -w 256 00:32:35.195 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2139394 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.55 reactor_0' 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2139394 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.55 reactor_0 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2139394 1 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2139394 1 busy 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2139394 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2139394 -w 256 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2139440 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.33 reactor_1' 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2139440 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.33 reactor_1 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:35.454 17:43:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2139480 00:32:45.425 Initializing NVMe Controllers 00:32:45.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:45.425 
Controller IO queue size 256, less than required. 00:32:45.425 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:45.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:45.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:45.426 Initialization complete. Launching workers. 00:32:45.426 ======================================================== 00:32:45.426 Latency(us) 00:32:45.426 Device Information : IOPS MiB/s Average min max 00:32:45.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16870.93 65.90 15182.14 2968.93 30746.16 00:32:45.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 17055.63 66.62 15014.11 7262.52 26392.20 00:32:45.426 ======================================================== 00:32:45.426 Total : 33926.56 132.53 15097.67 2968.93 30746.16 00:32:45.426 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2139394 0 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2139394 0 idle 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2139394 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:45.426 17:43:10 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2139394 -w 256 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2139394 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.23 reactor_0' 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2139394 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.23 reactor_0 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2139394 1 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2139394 1 idle 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2139394 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2139394 -w 256 00:32:45.426 17:43:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2139440 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2139440 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:45.426 17:43:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2139394 0 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2139394 0 idle 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2139394 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:47.335 17:43:13 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2139394 -w 256 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:47.335 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2139394 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.47 reactor_0' 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2139394 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.47 reactor_0 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2139394 1 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2139394 1 idle 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2139394 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2139394 -w 256 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2139440 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.09 reactor_1' 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2139440 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.09 reactor_1 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:47.336 17:43:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:47.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.596 17:43:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.596 rmmod nvme_tcp 00:32:47.596 rmmod nvme_fabrics 00:32:47.596 rmmod nvme_keyring 00:32:47.596 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.596 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:47.596 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:47.596 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2139394 ']' 00:32:47.596 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2139394 00:32:47.596 17:43:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2139394 ']' 00:32:47.596 17:43:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2139394 00:32:47.596 17:43:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:47.596 17:43:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.596 17:43:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2139394 00:32:47.596 17:43:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:47.596 17:43:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:47.596 17:43:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2139394' 00:32:47.596 killing process with pid 2139394 00:32:47.597 17:43:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2139394 00:32:47.597 17:43:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2139394 00:32:47.857 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:47.857 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:32:47.857 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:47.857 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:47.857 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:47.857 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:47.857 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:47.857 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:47.857 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:47.857 17:43:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.857 17:43:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:47.857 17:43:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.395 17:43:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.395 00:32:50.395 real 0m22.827s 00:32:50.395 user 0m39.734s 00:32:50.395 sys 0m8.443s 00:32:50.395 17:43:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.395 17:43:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:50.395 ************************************ 00:32:50.395 END TEST nvmf_interrupt 00:32:50.395 ************************************ 00:32:50.395 00:32:50.395 real 27m25.924s 00:32:50.395 user 56m57.705s 00:32:50.395 sys 9m20.890s 00:32:50.395 17:43:16 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.395 17:43:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:50.395 ************************************ 00:32:50.395 END TEST nvmf_tcp 00:32:50.395 ************************************ 00:32:50.395 17:43:16 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:50.395 17:43:16 -- 
spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:50.395 17:43:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:50.395 17:43:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.395 17:43:16 -- common/autotest_common.sh@10 -- # set +x 00:32:50.395 ************************************ 00:32:50.395 START TEST spdkcli_nvmf_tcp 00:32:50.395 ************************************ 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:50.395 * Looking for test storage... 00:32:50.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:50.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.395 --rc genhtml_branch_coverage=1 00:32:50.395 --rc genhtml_function_coverage=1 00:32:50.395 --rc genhtml_legend=1 00:32:50.395 --rc geninfo_all_blocks=1 
00:32:50.395 --rc geninfo_unexecuted_blocks=1 00:32:50.395 00:32:50.395 ' 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:50.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.395 --rc genhtml_branch_coverage=1 00:32:50.395 --rc genhtml_function_coverage=1 00:32:50.395 --rc genhtml_legend=1 00:32:50.395 --rc geninfo_all_blocks=1 00:32:50.395 --rc geninfo_unexecuted_blocks=1 00:32:50.395 00:32:50.395 ' 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:50.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.395 --rc genhtml_branch_coverage=1 00:32:50.395 --rc genhtml_function_coverage=1 00:32:50.395 --rc genhtml_legend=1 00:32:50.395 --rc geninfo_all_blocks=1 00:32:50.395 --rc geninfo_unexecuted_blocks=1 00:32:50.395 00:32:50.395 ' 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:50.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.395 --rc genhtml_branch_coverage=1 00:32:50.395 --rc genhtml_function_coverage=1 00:32:50.395 --rc genhtml_legend=1 00:32:50.395 --rc geninfo_all_blocks=1 00:32:50.395 --rc geninfo_unexecuted_blocks=1 00:32:50.395 00:32:50.395 ' 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.395 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:50.395 17:43:16 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:50.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2142253 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2142253 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2142253 ']' 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.396 17:43:16 
spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.396 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:50.396 [2024-12-09 17:43:16.758487] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:32:50.396 [2024-12-09 17:43:16.758534] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142253 ] 00:32:50.396 [2024-12-09 17:43:16.830545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:50.396 [2024-12-09 17:43:16.870653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.396 [2024-12-09 17:43:16.870654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.655 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:50.655 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:50.655 17:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:50.655 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:50.655 17:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:50.655 17:43:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:50.655 17:43:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- 
# [[ tcp == \r\d\m\a ]] 00:32:50.655 17:43:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:50.655 17:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:50.655 17:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:50.655 17:43:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:50.655 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:50.655 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:50.655 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:50.655 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:50.655 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:50.655 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:50.655 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:50.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:50.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:50.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:50.655 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:50.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:50.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:50.655 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:50.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:50.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:50.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:50.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:50.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:50.656 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:50.656 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:50.656 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:50.656 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:50.656 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:50.656 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:50.656 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:50.656 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:50.656 ' 00:32:53.188 [2024-12-09 17:43:19.706512] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:54.563 [2024-12-09 17:43:21.042929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:32:57.092 [2024-12-09 17:43:23.530540] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:59.620 [2024-12-09 17:43:25.681266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:00.995 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:00.995 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:00.995 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:00.995 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:00.995 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:00.995 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:00.995 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:00.995 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:00.995 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:00.995 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:00.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:00.995 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:00.995 17:43:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:33:00.995 17:43:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:00.995 17:43:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:00.995 17:43:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:00.995 17:43:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:00.995 17:43:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:00.995 17:43:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:00.995 17:43:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:01.562 17:43:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:01.562 17:43:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:01.562 17:43:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:01.562 17:43:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:01.562 17:43:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:01.562 17:43:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:01.562 17:43:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:01.562 17:43:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:01.562 17:43:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:01.562 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:33:01.562 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:01.562 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:01.562 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:01.562 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:01.562 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:01.562 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:01.562 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:01.562 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:01.562 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:01.562 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:01.562 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:01.562 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:01.562 ' 00:33:08.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:08.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:08.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:08.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:08.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:08.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:08.124 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:08.124 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:08.124 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:08.124 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:08.124 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:08.124 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:08.124 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:08.124 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2142253 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2142253 ']' 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2142253 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2142253 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2142253' 00:33:08.124 killing process with pid 2142253 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2142253 00:33:08.124 17:43:33 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2142253 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:08.124 17:43:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2142253 ']' 00:33:08.125 17:43:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2142253 00:33:08.125 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2142253 ']' 00:33:08.125 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2142253 00:33:08.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2142253) - No such process 00:33:08.125 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2142253 is not found' 00:33:08.125 Process with pid 2142253 is not found 00:33:08.125 17:43:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:08.125 17:43:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:08.125 17:43:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:08.125 00:33:08.125 real 0m17.325s 00:33:08.125 user 0m38.164s 00:33:08.125 sys 0m0.810s 00:33:08.125 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:08.125 17:43:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:08.125 ************************************ 00:33:08.125 END TEST spdkcli_nvmf_tcp 00:33:08.125 ************************************ 00:33:08.125 17:43:33 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:08.125 17:43:33 -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:33:08.125 17:43:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:08.125 17:43:33 -- common/autotest_common.sh@10 -- # set +x 00:33:08.125 ************************************ 00:33:08.125 START TEST nvmf_identify_passthru 00:33:08.125 ************************************ 00:33:08.125 17:43:33 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:08.125 * Looking for test storage... 00:33:08.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:08.125 17:43:33 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:08.125 17:43:33 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:33:08.125 17:43:33 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:08.125 17:43:34 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:08.125 17:43:34 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:08.125 17:43:34 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:08.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.125 --rc genhtml_branch_coverage=1 00:33:08.125 --rc genhtml_function_coverage=1 00:33:08.125 --rc genhtml_legend=1 
00:33:08.125 --rc geninfo_all_blocks=1 00:33:08.125 --rc geninfo_unexecuted_blocks=1 00:33:08.125 00:33:08.125 ' 00:33:08.125 17:43:34 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:08.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.125 --rc genhtml_branch_coverage=1 00:33:08.125 --rc genhtml_function_coverage=1 00:33:08.125 --rc genhtml_legend=1 00:33:08.125 --rc geninfo_all_blocks=1 00:33:08.125 --rc geninfo_unexecuted_blocks=1 00:33:08.125 00:33:08.125 ' 00:33:08.125 17:43:34 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:08.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.125 --rc genhtml_branch_coverage=1 00:33:08.125 --rc genhtml_function_coverage=1 00:33:08.125 --rc genhtml_legend=1 00:33:08.125 --rc geninfo_all_blocks=1 00:33:08.125 --rc geninfo_unexecuted_blocks=1 00:33:08.125 00:33:08.125 ' 00:33:08.125 17:43:34 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:08.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.125 --rc genhtml_branch_coverage=1 00:33:08.125 --rc genhtml_function_coverage=1 00:33:08.125 --rc genhtml_legend=1 00:33:08.125 --rc geninfo_all_blocks=1 00:33:08.125 --rc geninfo_unexecuted_blocks=1 00:33:08.125 00:33:08.125 ' 00:33:08.125 17:43:34 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.125 17:43:34 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.125 17:43:34 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.125 17:43:34 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.125 17:43:34 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.125 17:43:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:08.125 17:43:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:08.125 17:43:34 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:08.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:08.125 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:08.125 17:43:34 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.125 17:43:34 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.126 17:43:34 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.126 17:43:34 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.126 17:43:34 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.126 17:43:34 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.126 17:43:34 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.126 17:43:34 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.126 17:43:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:08.126 17:43:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.126 17:43:34 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:08.126 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:08.126 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.126 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:08.126 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:08.126 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:08.126 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.126 17:43:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:08.126 17:43:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.126 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:08.126 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:08.126 17:43:34 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:08.126 17:43:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:13.565 
17:43:39 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:13.565 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:13.565 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:13.565 Found net devices under 0000:af:00.0: cvl_0_0 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.565 17:43:39 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:13.565 Found net devices under 0000:af:00.1: cvl_0_1 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:13.565 
17:43:39 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:13.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:13.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:33:13.565 00:33:13.565 --- 10.0.0.2 ping statistics --- 00:33:13.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.565 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:13.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:13.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:33:13.565 00:33:13.565 --- 10.0.0.1 ping statistics --- 00:33:13.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.565 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:13.565 17:43:39 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:13.565 17:43:40 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:13.565 17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:13.565 17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:13.565 17:43:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:13.565 
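The `nvmf_tcp_init` steps traced above boil down to the following sequence. This is a minimal standalone sketch, not a verbatim excerpt of `nvmf/common.sh`; it assumes two ports `cvl_0_0`/`cvl_0_1` already bound to the kernel `ice` driver and root privileges, so it is illustrative only:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence seen in the trace above (assumptions:
# cvl_0_0 / cvl_0_1 exist, run as root; addresses match the test defaults).
set -e

TARGET_NS=cvl_0_0_ns_spdk

# Flush any stale addresses on both ports
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Move the target-side port into its own network namespace
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"

# Initiator keeps 10.0.0.1 in the root namespace; target gets 10.0.0.2 inside the netns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

# Open the NVMe/TCP port (4420) on the initiator-side interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify connectivity in both directions, then load the host-side driver
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
modprobe nvme-tcp
```

Running the target inside the namespace while the initiator stays in the root namespace is what lets a single two-port machine exercise real NVMe/TCP traffic end to end.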
17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:13.565 17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:13.565 17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:13.565 17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:13.565 17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:13.565 17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:13.565 17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:13.565 17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:13.565 17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:13.824 17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:13.824 17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:13.824 17:43:40 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:13.824 17:43:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:13.824 17:43:40 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:13.824 17:43:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:13.824 17:43:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:13.824 17:43:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:18.008 17:43:44 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN 00:33:18.008 17:43:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:18.008 17:43:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:18.008 17:43:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:22.190 17:43:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:22.190 17:43:48 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:22.190 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:22.190 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:22.190 17:43:48 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:22.190 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:22.190 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:22.190 17:43:48 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2149419 00:33:22.190 17:43:48 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:22.190 17:43:48 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:22.190 17:43:48 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2149419 00:33:22.190 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2149419 ']' 00:33:22.190 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:22.190 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:22.191 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:22.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:22.191 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:22.191 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:22.191 [2024-12-09 17:43:48.508002] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:33:22.191 [2024-12-09 17:43:48.508047] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:22.191 [2024-12-09 17:43:48.587675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:22.191 [2024-12-09 17:43:48.628934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:22.191 [2024-12-09 17:43:48.628973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:22.191 [2024-12-09 17:43:48.628980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:22.191 [2024-12-09 17:43:48.628986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:22.191 [2024-12-09 17:43:48.628993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
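The RPC bring-up that `identify_passthru.sh` drives next can be sketched as the sequence below. `rpc_cmd` in the trace is a wrapper around SPDK's `scripts/rpc.py`; the commands mirror those in the log, but the path is illustrative:

```shell
# Sketch of the RPC configuration sequence from the trace (path is an assumption).
RPC=scripts/rpc.py

# nvmf_tgt was started with --wait-for-rpc, so configuration that must precede
# subsystem init (here: passthru identify) is applied before framework_start_init.
$RPC nvmf_set_config --passthru-identify-ctrlr
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192

# Attach the local PCIe controller and export it over NVMe/TCP
$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The test then runs `spdk_nvme_identify` against both the PCIe device and the TCP subsystem and asserts that the serial and model numbers match, which is exactly what the passthru-identify feature promises.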
00:33:22.191 [2024-12-09 17:43:48.630407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.191 [2024-12-09 17:43:48.630516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:22.191 [2024-12-09 17:43:48.630626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.191 [2024-12-09 17:43:48.630627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:22.191 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:22.191 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:22.191 17:43:48 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:22.191 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.191 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:22.191 INFO: Log level set to 20 00:33:22.191 INFO: Requests: 00:33:22.191 { 00:33:22.191 "jsonrpc": "2.0", 00:33:22.191 "method": "nvmf_set_config", 00:33:22.191 "id": 1, 00:33:22.191 "params": { 00:33:22.191 "admin_cmd_passthru": { 00:33:22.191 "identify_ctrlr": true 00:33:22.191 } 00:33:22.191 } 00:33:22.191 } 00:33:22.191 00:33:22.191 INFO: response: 00:33:22.191 { 00:33:22.191 "jsonrpc": "2.0", 00:33:22.191 "id": 1, 00:33:22.191 "result": true 00:33:22.191 } 00:33:22.191 00:33:22.191 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.191 17:43:48 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:22.191 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.191 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:22.191 INFO: Setting log level to 20 00:33:22.191 INFO: Setting log level to 20 00:33:22.191 INFO: Log level set to 20 00:33:22.191 INFO: Log level set to 20 00:33:22.191 
INFO: Requests: 00:33:22.191 { 00:33:22.191 "jsonrpc": "2.0", 00:33:22.191 "method": "framework_start_init", 00:33:22.191 "id": 1 00:33:22.191 } 00:33:22.191 00:33:22.191 INFO: Requests: 00:33:22.191 { 00:33:22.191 "jsonrpc": "2.0", 00:33:22.191 "method": "framework_start_init", 00:33:22.191 "id": 1 00:33:22.191 } 00:33:22.191 00:33:22.449 [2024-12-09 17:43:48.742423] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:22.449 INFO: response: 00:33:22.449 { 00:33:22.449 "jsonrpc": "2.0", 00:33:22.449 "id": 1, 00:33:22.449 "result": true 00:33:22.449 } 00:33:22.449 00:33:22.449 INFO: response: 00:33:22.449 { 00:33:22.449 "jsonrpc": "2.0", 00:33:22.449 "id": 1, 00:33:22.449 "result": true 00:33:22.449 } 00:33:22.449 00:33:22.449 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.449 17:43:48 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:22.449 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.449 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:22.449 INFO: Setting log level to 40 00:33:22.449 INFO: Setting log level to 40 00:33:22.449 INFO: Setting log level to 40 00:33:22.449 [2024-12-09 17:43:48.755677] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:22.449 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.449 17:43:48 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:22.449 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:22.449 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:22.449 17:43:48 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:22.449 17:43:48 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.449 17:43:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:25.732 Nvme0n1 00:33:25.732 17:43:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.732 17:43:51 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:25.732 17:43:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.732 17:43:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:25.732 17:43:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.732 17:43:51 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:25.732 17:43:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.732 17:43:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:25.732 17:43:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.732 17:43:51 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.732 17:43:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.732 17:43:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:25.732 [2024-12-09 17:43:51.668583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.732 17:43:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.732 17:43:51 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:25.732 17:43:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.732 17:43:51 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:25.732 [ 00:33:25.732 { 00:33:25.732 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:25.732 "subtype": "Discovery", 00:33:25.732 "listen_addresses": [], 00:33:25.732 "allow_any_host": true, 00:33:25.732 "hosts": [] 00:33:25.732 }, 00:33:25.732 { 00:33:25.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:25.732 "subtype": "NVMe", 00:33:25.732 "listen_addresses": [ 00:33:25.732 { 00:33:25.732 "trtype": "TCP", 00:33:25.732 "adrfam": "IPv4", 00:33:25.732 "traddr": "10.0.0.2", 00:33:25.732 "trsvcid": "4420" 00:33:25.732 } 00:33:25.732 ], 00:33:25.732 "allow_any_host": true, 00:33:25.732 "hosts": [], 00:33:25.732 "serial_number": "SPDK00000000000001", 00:33:25.732 "model_number": "SPDK bdev Controller", 00:33:25.732 "max_namespaces": 1, 00:33:25.732 "min_cntlid": 1, 00:33:25.732 "max_cntlid": 65519, 00:33:25.732 "namespaces": [ 00:33:25.732 { 00:33:25.732 "nsid": 1, 00:33:25.732 "bdev_name": "Nvme0n1", 00:33:25.732 "name": "Nvme0n1", 00:33:25.732 "nguid": "E2F73A281AB149948F54AB12BA3368A7", 00:33:25.732 "uuid": "e2f73a28-1ab1-4994-8f54-ab12ba3368a7" 00:33:25.732 } 00:33:25.732 ] 00:33:25.732 } 00:33:25.732 ] 00:33:25.732 17:43:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.732 17:43:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:25.732 17:43:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:25.732 17:43:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:25.732 17:43:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:33:25.732 17:43:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:25.732 17:43:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:25.732 17:43:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:25.732 17:43:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:25.732 17:43:52 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:33:25.732 17:43:52 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:25.732 17:43:52 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:25.732 17:43:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.732 17:43:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:25.732 17:43:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.732 17:43:52 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:25.732 17:43:52 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:25.732 17:43:52 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:25.732 17:43:52 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:25.732 17:43:52 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:25.732 17:43:52 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:25.732 17:43:52 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:25.732 17:43:52 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:25.732 rmmod nvme_tcp 00:33:25.732 rmmod nvme_fabrics 00:33:25.732 rmmod nvme_keyring 00:33:25.732 17:43:52 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:25.732 17:43:52 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:25.732 17:43:52 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:25.732 17:43:52 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2149419 ']' 00:33:25.732 17:43:52 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2149419 00:33:25.732 17:43:52 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2149419 ']' 00:33:25.732 17:43:52 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2149419 00:33:25.732 17:43:52 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:25.732 17:43:52 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:25.732 17:43:52 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2149419 00:33:25.732 17:43:52 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:25.732 17:43:52 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:25.732 17:43:52 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2149419' 00:33:25.732 killing process with pid 2149419 00:33:25.732 17:43:52 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2149419 00:33:25.732 17:43:52 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2149419 00:33:27.106 17:43:53 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:27.106 17:43:53 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:27.107 17:43:53 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:27.107 17:43:53 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:27.107 17:43:53 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:27.107 17:43:53 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:27.107 17:43:53 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:27.365 17:43:53 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:27.365 17:43:53 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:27.365 17:43:53 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.365 17:43:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:27.365 17:43:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.270 17:43:55 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:29.270 00:33:29.270 real 0m21.820s 00:33:29.270 user 0m26.744s 00:33:29.270 sys 0m6.168s 00:33:29.270 17:43:55 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:29.270 17:43:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:29.270 ************************************ 00:33:29.270 END TEST nvmf_identify_passthru 00:33:29.270 ************************************ 00:33:29.270 17:43:55 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:29.270 17:43:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:29.270 17:43:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:29.270 17:43:55 -- common/autotest_common.sh@10 -- # set +x 00:33:29.270 ************************************ 00:33:29.270 START TEST nvmf_dif 00:33:29.270 ************************************ 00:33:29.270 17:43:55 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:29.530 * Looking for test storage... 
00:33:29.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:29.530 17:43:55 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:29.530 17:43:55 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:33:29.530 17:43:55 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:29.530 17:43:55 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:29.530 17:43:55 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:29.530 17:43:55 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:29.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.530 --rc genhtml_branch_coverage=1 00:33:29.530 --rc genhtml_function_coverage=1 00:33:29.530 --rc genhtml_legend=1 00:33:29.530 --rc geninfo_all_blocks=1 00:33:29.530 --rc geninfo_unexecuted_blocks=1 00:33:29.530 00:33:29.530 ' 00:33:29.530 17:43:55 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:29.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.530 --rc genhtml_branch_coverage=1 00:33:29.530 --rc genhtml_function_coverage=1 00:33:29.530 --rc genhtml_legend=1 00:33:29.530 --rc geninfo_all_blocks=1 00:33:29.530 --rc geninfo_unexecuted_blocks=1 00:33:29.530 00:33:29.530 ' 00:33:29.530 17:43:55 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:33:29.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.530 --rc genhtml_branch_coverage=1 00:33:29.530 --rc genhtml_function_coverage=1 00:33:29.530 --rc genhtml_legend=1 00:33:29.530 --rc geninfo_all_blocks=1 00:33:29.530 --rc geninfo_unexecuted_blocks=1 00:33:29.530 00:33:29.530 ' 00:33:29.530 17:43:55 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:29.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.530 --rc genhtml_branch_coverage=1 00:33:29.530 --rc genhtml_function_coverage=1 00:33:29.530 --rc genhtml_legend=1 00:33:29.530 --rc geninfo_all_blocks=1 00:33:29.530 --rc geninfo_unexecuted_blocks=1 00:33:29.530 00:33:29.530 ' 00:33:29.530 17:43:55 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:29.530 17:43:55 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:29.530 17:43:55 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.530 17:43:55 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.530 17:43:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.530 17:43:55 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.531 17:43:55 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.531 17:43:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:29.531 17:43:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:29.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:29.531 17:43:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:29.531 17:43:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:29.531 17:43:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:29.531 17:43:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:29.531 17:43:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.531 17:43:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:29.531 17:43:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:29.531 17:43:55 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:29.531 17:43:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:36.108 17:44:01 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:36.108 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:36.108 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.108 17:44:01 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:36.108 Found net devices under 0000:af:00.0: cvl_0_0 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:36.108 Found net devices under 0000:af:00.1: cvl_0_1 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.108 
17:44:01 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:36.108 17:44:01 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:36.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:36.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:33:36.108 00:33:36.108 --- 10.0.0.2 ping statistics --- 00:33:36.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.109 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:33:36.109 17:44:01 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:36.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:33:36.109 00:33:36.109 --- 10.0.0.1 ping statistics --- 00:33:36.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.109 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:33:36.109 17:44:01 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.109 17:44:01 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:36.109 17:44:01 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:36.109 17:44:01 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:38.014 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:38.014 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:38.014 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:38.014 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:38.014 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:38.014 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:38.014 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:38.014 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:38.015 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:38.015 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:38.015 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:38.015 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:38.015 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:38.015 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:38.015 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:38.015 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:38.015 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:38.274 17:44:04 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.274 17:44:04 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:38.274 17:44:04 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:38.274 17:44:04 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:38.274 17:44:04 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:38.274 17:44:04 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:38.274 17:44:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:38.274 17:44:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:38.274 17:44:04 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:38.274 17:44:04 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:38.274 17:44:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:38.274 17:44:04 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2154795 00:33:38.274 17:44:04 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2154795 00:33:38.274 17:44:04 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:38.274 17:44:04 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2154795 ']' 00:33:38.274 17:44:04 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.274 17:44:04 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.274 17:44:04 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:38.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:38.274 17:44:04 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.274 17:44:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:38.274 [2024-12-09 17:44:04.741359] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:33:38.274 [2024-12-09 17:44:04.741399] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.533 [2024-12-09 17:44:04.818073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.533 [2024-12-09 17:44:04.858578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:38.533 [2024-12-09 17:44:04.858608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:38.533 [2024-12-09 17:44:04.858615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:38.533 [2024-12-09 17:44:04.858621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:38.533 [2024-12-09 17:44:04.858627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:38.534 [2024-12-09 17:44:04.859098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.534 17:44:04 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:38.534 17:44:04 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:38.534 17:44:04 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:38.534 17:44:04 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:38.534 17:44:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:38.534 17:44:04 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:38.534 17:44:04 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:38.534 17:44:04 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:38.534 17:44:04 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.534 17:44:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:38.534 [2024-12-09 17:44:04.998583] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:38.534 17:44:05 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.534 17:44:05 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:38.534 17:44:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:38.534 17:44:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.534 17:44:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:38.534 ************************************ 00:33:38.534 START TEST fio_dif_1_default 00:33:38.534 ************************************ 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:38.534 bdev_null0 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:38.534 [2024-12-09 17:44:05.066869] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.534 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:38.791 { 00:33:38.791 "params": { 00:33:38.791 "name": "Nvme$subsystem", 00:33:38.791 "trtype": "$TEST_TRANSPORT", 00:33:38.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:38.791 "adrfam": "ipv4", 00:33:38.791 "trsvcid": "$NVMF_PORT", 00:33:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:38.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:38.791 "hdgst": ${hdgst:-false}, 00:33:38.791 "ddgst": ${ddgst:-false} 00:33:38.791 }, 00:33:38.791 "method": "bdev_nvme_attach_controller" 00:33:38.791 } 00:33:38.791 EOF 00:33:38.791 )") 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:38.791 17:44:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:38.791 "params": { 00:33:38.791 "name": "Nvme0", 00:33:38.791 "trtype": "tcp", 00:33:38.791 "traddr": "10.0.0.2", 00:33:38.791 "adrfam": "ipv4", 00:33:38.791 "trsvcid": "4420", 00:33:38.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:38.791 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:38.791 "hdgst": false, 00:33:38.791 "ddgst": false 00:33:38.791 }, 00:33:38.792 "method": "bdev_nvme_attach_controller" 00:33:38.792 }' 00:33:38.792 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:38.792 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:38.792 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:38.792 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:38.792 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:38.792 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:38.792 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:38.792 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:38.792 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:38.792 17:44:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.049 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:39.049 fio-3.35 
00:33:39.049 Starting 1 thread 00:33:51.255 00:33:51.255 filename0: (groupid=0, jobs=1): err= 0: pid=2155156: Mon Dec 9 17:44:16 2024 00:33:51.255 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:33:51.255 slat (nsec): min=5797, max=27629, avg=6165.97, stdev=1204.43 00:33:51.255 clat (usec): min=40791, max=45515, avg=41005.50, stdev=300.32 00:33:51.255 lat (usec): min=40797, max=45543, avg=41011.67, stdev=300.83 00:33:51.255 clat percentiles (usec): 00:33:51.255 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:51.255 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:51.255 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:51.255 | 99.00th=[41157], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:33:51.255 | 99.99th=[45351] 00:33:51.255 bw ( KiB/s): min= 384, max= 416, per=99.48%, avg=388.80, stdev=11.72, samples=20 00:33:51.255 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:33:51.255 lat (msec) : 50=100.00% 00:33:51.255 cpu : usr=92.37%, sys=7.39%, ctx=11, majf=0, minf=0 00:33:51.255 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:51.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.255 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.255 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:51.255 00:33:51.255 Run status group 0 (all jobs): 00:33:51.255 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10010-10010msec 00:33:51.255 17:44:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:51.255 17:44:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:51.255 17:44:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:51.255 17:44:16 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:33:51.255 17:44:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.256 00:33:51.256 real 0m11.194s 00:33:51.256 user 0m16.303s 00:33:51.256 sys 0m1.047s 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:51.256 ************************************ 00:33:51.256 END TEST fio_dif_1_default 00:33:51.256 ************************************ 00:33:51.256 17:44:16 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:51.256 17:44:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:51.256 17:44:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.256 17:44:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:51.256 ************************************ 00:33:51.256 START TEST fio_dif_1_multi_subsystems 00:33:51.256 ************************************ 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.256 bdev_null0 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.256 [2024-12-09 17:44:16.322540] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.256 bdev_null1 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.256 17:44:16 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:51.256 { 00:33:51.256 "params": { 00:33:51.256 "name": "Nvme$subsystem", 00:33:51.256 "trtype": "$TEST_TRANSPORT", 00:33:51.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.256 "adrfam": "ipv4", 00:33:51.256 "trsvcid": "$NVMF_PORT", 00:33:51.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:51.256 "hdgst": ${hdgst:-false}, 00:33:51.256 "ddgst": ${ddgst:-false} 00:33:51.256 }, 00:33:51.256 "method": "bdev_nvme_attach_controller" 00:33:51.256 } 00:33:51.256 EOF 00:33:51.256 )") 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.256 17:44:16 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:51.256 { 00:33:51.256 "params": { 00:33:51.256 "name": "Nvme$subsystem", 00:33:51.256 "trtype": "$TEST_TRANSPORT", 00:33:51.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.256 "adrfam": "ipv4", 00:33:51.256 "trsvcid": "$NVMF_PORT", 00:33:51.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:51.256 "hdgst": ${hdgst:-false}, 00:33:51.256 "ddgst": ${ddgst:-false} 00:33:51.256 }, 00:33:51.256 "method": "bdev_nvme_attach_controller" 00:33:51.256 } 00:33:51.256 EOF 00:33:51.256 )") 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:51.256 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:51.256 "params": { 00:33:51.256 "name": "Nvme0", 00:33:51.256 "trtype": "tcp", 00:33:51.256 "traddr": "10.0.0.2", 00:33:51.256 "adrfam": "ipv4", 00:33:51.256 "trsvcid": "4420", 00:33:51.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:51.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:51.256 "hdgst": false, 00:33:51.256 "ddgst": false 00:33:51.256 }, 00:33:51.256 "method": "bdev_nvme_attach_controller" 00:33:51.256 },{ 00:33:51.256 "params": { 00:33:51.256 "name": "Nvme1", 00:33:51.256 "trtype": "tcp", 00:33:51.257 "traddr": "10.0.0.2", 00:33:51.257 "adrfam": "ipv4", 00:33:51.257 "trsvcid": "4420", 00:33:51.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:51.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:51.257 "hdgst": false, 00:33:51.257 "ddgst": false 00:33:51.257 }, 00:33:51.257 "method": "bdev_nvme_attach_controller" 00:33:51.257 }' 00:33:51.257 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:51.257 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:51.257 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.257 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.257 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:51.257 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:51.257 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:51.257 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:51.257 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:51.257 17:44:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.257 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:51.257 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:51.257 fio-3.35 00:33:51.257 Starting 2 threads 00:34:01.228 00:34:01.228 filename0: (groupid=0, jobs=1): err= 0: pid=2157075: Mon Dec 9 17:44:27 2024 00:34:01.228 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:34:01.228 slat (nsec): min=5888, max=23480, avg=7816.75, stdev=2662.59 00:34:01.228 clat (usec): min=40799, max=42031, avg=40989.83, stdev=118.84 00:34:01.228 lat (usec): min=40812, max=42043, avg=40997.65, stdev=119.22 00:34:01.228 clat percentiles (usec): 00:34:01.228 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:01.228 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:01.228 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:01.228 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:01.228 | 99.99th=[42206] 00:34:01.228 bw ( KiB/s): min= 384, max= 416, per=49.65%, avg=388.80, stdev=11.72, samples=20 00:34:01.228 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:01.228 lat (msec) : 50=100.00% 00:34:01.228 cpu : usr=96.77%, sys=2.98%, ctx=17, majf=0, minf=0 00:34:01.228 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:01.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.228 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:01.228 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:01.228 filename1: (groupid=0, jobs=1): err= 0: pid=2157076: Mon Dec 9 17:44:27 2024 00:34:01.228 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10011msec) 00:34:01.228 slat (nsec): min=5902, max=25304, avg=7833.22, stdev=2731.21 00:34:01.228 clat (usec): min=433, max=42058, avg=40834.46, stdev=2591.36 00:34:01.228 lat (usec): min=439, max=42070, avg=40842.30, stdev=2591.40 00:34:01.228 clat percentiles (usec): 00:34:01.228 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:01.228 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:01.228 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:01.228 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:01.228 | 99.99th=[42206] 00:34:01.228 bw ( KiB/s): min= 384, max= 416, per=49.90%, avg=390.40, stdev=13.13, samples=20 00:34:01.228 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:34:01.228 lat (usec) : 500=0.41% 00:34:01.228 lat (msec) : 50=99.59% 00:34:01.228 cpu : usr=96.91%, sys=2.84%, ctx=13, majf=0, minf=9 00:34:01.228 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:01.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.228 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:01.228 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:01.228 00:34:01.228 Run status group 0 (all jobs): 00:34:01.228 READ: bw=782KiB/s (800kB/s), 390KiB/s-392KiB/s (399kB/s-401kB/s), io=7824KiB (8012kB), run=10008-10011msec 00:34:01.228 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:01.229 17:44:27 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.229 00:34:01.229 real 0m11.436s 00:34:01.229 user 0m26.086s 00:34:01.229 sys 0m0.894s 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.229 17:44:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.229 ************************************ 00:34:01.229 END TEST fio_dif_1_multi_subsystems 00:34:01.229 ************************************ 00:34:01.229 17:44:27 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:01.229 17:44:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:01.229 17:44:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.229 17:44:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:01.487 ************************************ 00:34:01.487 START TEST fio_dif_rand_params 00:34:01.487 ************************************ 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:01.487 bdev_null0 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:01.487 17:44:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:01.487 [2024-12-09 17:44:27.831433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:01.487 17:44:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:01.487 { 00:34:01.487 "params": { 
00:34:01.487 "name": "Nvme$subsystem", 00:34:01.487 "trtype": "$TEST_TRANSPORT", 00:34:01.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:01.487 "adrfam": "ipv4", 00:34:01.487 "trsvcid": "$NVMF_PORT", 00:34:01.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:01.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:01.487 "hdgst": ${hdgst:-false}, 00:34:01.487 "ddgst": ${ddgst:-false} 00:34:01.487 }, 00:34:01.487 "method": "bdev_nvme_attach_controller" 00:34:01.487 } 00:34:01.487 EOF 00:34:01.487 )") 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:01.488 17:44:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:01.488 "params": { 00:34:01.488 "name": "Nvme0", 00:34:01.488 "trtype": "tcp", 00:34:01.488 "traddr": "10.0.0.2", 00:34:01.488 "adrfam": "ipv4", 00:34:01.488 "trsvcid": "4420", 00:34:01.488 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:01.488 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:01.488 "hdgst": false, 00:34:01.488 "ddgst": false 00:34:01.488 }, 00:34:01.488 "method": "bdev_nvme_attach_controller" 00:34:01.488 }' 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:01.488 17:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.746 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:01.746 ... 00:34:01.746 fio-3.35 00:34:01.746 Starting 3 threads 00:34:08.309 00:34:08.309 filename0: (groupid=0, jobs=1): err= 0: pid=2158992: Mon Dec 9 17:44:33 2024 00:34:08.309 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(178MiB/5043msec) 00:34:08.309 slat (nsec): min=6149, max=32011, avg=10826.11, stdev=2676.22 00:34:08.309 clat (usec): min=2977, max=86454, avg=10630.28, stdev=10833.80 00:34:08.309 lat (usec): min=2985, max=86461, avg=10641.11, stdev=10833.76 00:34:08.309 clat percentiles (usec): 00:34:08.309 | 1.00th=[ 3556], 5.00th=[ 3949], 10.00th=[ 5342], 20.00th=[ 6259], 00:34:08.309 | 30.00th=[ 7177], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8586], 00:34:08.309 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[10028], 95.00th=[46924], 00:34:08.309 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51643], 99.95th=[86508], 00:34:08.309 | 99.99th=[86508] 00:34:08.309 bw ( KiB/s): min=21504, max=43776, per=33.76%, avg=36300.80, stdev=6725.44, samples=10 00:34:08.309 iops : min= 168, max= 342, avg=283.60, stdev=52.54, samples=10 00:34:08.309 lat (msec) : 4=5.35%, 10=84.10%, 20=3.24%, 50=5.70%, 100=1.62% 00:34:08.309 cpu : usr=96.67%, sys=3.01%, ctx=9, majf=0, minf=9 00:34:08.309 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.309 issued rwts: total=1421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.309 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:08.309 filename0: (groupid=0, jobs=1): err= 0: pid=2158993: Mon Dec 9 17:44:33 2024 00:34:08.309 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(177MiB/5047msec) 00:34:08.309 slat (nsec): min=6157, max=36811, avg=11052.56, stdev=3044.29 
00:34:08.309 clat (usec): min=3252, max=87504, avg=10640.12, stdev=9338.36 00:34:08.309 lat (usec): min=3259, max=87512, avg=10651.17, stdev=9338.29 00:34:08.309 clat percentiles (usec): 00:34:08.309 | 1.00th=[ 3621], 5.00th=[ 3916], 10.00th=[ 4817], 20.00th=[ 6325], 00:34:08.309 | 30.00th=[ 6849], 40.00th=[ 8029], 50.00th=[ 9241], 60.00th=[ 9896], 00:34:08.309 | 70.00th=[10552], 80.00th=[11207], 90.00th=[11994], 95.00th=[44827], 00:34:08.309 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51643], 99.95th=[87557], 00:34:08.309 | 99.99th=[87557] 00:34:08.309 bw ( KiB/s): min=28672, max=46336, per=33.69%, avg=36224.00, stdev=4926.11, samples=10 00:34:08.309 iops : min= 224, max= 362, avg=283.00, stdev=38.49, samples=10 00:34:08.309 lat (msec) : 4=5.86%, 10=55.47%, 20=33.31%, 50=3.88%, 100=1.48% 00:34:08.309 cpu : usr=92.19%, sys=5.39%, ctx=363, majf=0, minf=9 00:34:08.309 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.309 issued rwts: total=1417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.309 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:08.309 filename0: (groupid=0, jobs=1): err= 0: pid=2158994: Mon Dec 9 17:44:33 2024 00:34:08.309 read: IOPS=280, BW=35.0MiB/s (36.7MB/s)(175MiB/5002msec) 00:34:08.309 slat (nsec): min=6159, max=27867, avg=11025.75, stdev=2350.99 00:34:08.309 clat (usec): min=2982, max=51707, avg=10689.76, stdev=9845.62 00:34:08.309 lat (usec): min=2989, max=51719, avg=10700.79, stdev=9845.50 00:34:08.309 clat percentiles (usec): 00:34:08.309 | 1.00th=[ 3654], 5.00th=[ 4621], 10.00th=[ 5866], 20.00th=[ 6456], 00:34:08.309 | 30.00th=[ 7242], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9241], 00:34:08.309 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10814], 95.00th=[46400], 00:34:08.309 | 99.00th=[50070], 99.50th=[50594], 
99.90th=[51643], 99.95th=[51643], 00:34:08.309 | 99.99th=[51643] 00:34:08.309 bw ( KiB/s): min=17152, max=43264, per=32.83%, avg=35299.56, stdev=8491.95, samples=9 00:34:08.309 iops : min= 134, max= 338, avg=275.78, stdev=66.34, samples=9 00:34:08.309 lat (msec) : 4=2.50%, 10=78.17%, 20=13.12%, 50=5.28%, 100=0.93% 00:34:08.309 cpu : usr=92.82%, sys=4.96%, ctx=203, majf=0, minf=9 00:34:08.309 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.309 issued rwts: total=1402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.309 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:08.309 00:34:08.309 Run status group 0 (all jobs): 00:34:08.309 READ: bw=105MiB/s (110MB/s), 35.0MiB/s-35.2MiB/s (36.7MB/s-36.9MB/s), io=530MiB (556MB), run=5002-5047msec 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:08.309 17:44:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.309 bdev_null0 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.309 [2024-12-09 17:44:33.990086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.309 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:08.310 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:08.310 17:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:08.310 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.310 17:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.310 bdev_null1 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:08.310 bdev_null2 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:08.310 { 00:34:08.310 "params": { 00:34:08.310 "name": "Nvme$subsystem", 00:34:08.310 "trtype": "$TEST_TRANSPORT", 00:34:08.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.310 "adrfam": "ipv4", 00:34:08.310 "trsvcid": "$NVMF_PORT", 00:34:08.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.310 "hdgst": ${hdgst:-false}, 00:34:08.310 "ddgst": ${ddgst:-false} 00:34:08.310 }, 00:34:08.310 "method": "bdev_nvme_attach_controller" 00:34:08.310 } 00:34:08.310 EOF 00:34:08.310 )") 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.310 17:44:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:08.310 { 00:34:08.310 "params": { 00:34:08.310 "name": "Nvme$subsystem", 00:34:08.310 "trtype": "$TEST_TRANSPORT", 00:34:08.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.310 "adrfam": "ipv4", 00:34:08.310 "trsvcid": "$NVMF_PORT", 00:34:08.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.310 "hdgst": ${hdgst:-false}, 00:34:08.310 "ddgst": ${ddgst:-false} 00:34:08.310 }, 00:34:08.310 "method": "bdev_nvme_attach_controller" 00:34:08.310 } 00:34:08.310 EOF 00:34:08.310 )") 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:08.310 17:44:34 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:08.310 { 00:34:08.310 "params": { 00:34:08.310 "name": "Nvme$subsystem", 00:34:08.310 "trtype": "$TEST_TRANSPORT", 00:34:08.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.310 "adrfam": "ipv4", 00:34:08.310 "trsvcid": "$NVMF_PORT", 00:34:08.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.310 "hdgst": ${hdgst:-false}, 00:34:08.310 "ddgst": ${ddgst:-false} 00:34:08.310 }, 00:34:08.310 "method": "bdev_nvme_attach_controller" 00:34:08.310 } 00:34:08.310 EOF 00:34:08.310 )") 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:08.310 "params": { 00:34:08.310 "name": "Nvme0", 00:34:08.310 "trtype": "tcp", 00:34:08.310 "traddr": "10.0.0.2", 00:34:08.310 "adrfam": "ipv4", 00:34:08.310 "trsvcid": "4420", 00:34:08.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:08.310 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:08.310 "hdgst": false, 00:34:08.310 "ddgst": false 00:34:08.310 }, 00:34:08.310 "method": "bdev_nvme_attach_controller" 00:34:08.310 },{ 00:34:08.310 "params": { 00:34:08.310 "name": "Nvme1", 00:34:08.310 "trtype": "tcp", 00:34:08.310 "traddr": "10.0.0.2", 00:34:08.310 "adrfam": "ipv4", 00:34:08.310 "trsvcid": "4420", 00:34:08.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:08.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:08.310 "hdgst": false, 00:34:08.310 "ddgst": false 00:34:08.310 }, 00:34:08.310 "method": "bdev_nvme_attach_controller" 00:34:08.310 },{ 00:34:08.310 "params": { 00:34:08.310 "name": "Nvme2", 00:34:08.310 "trtype": "tcp", 00:34:08.310 "traddr": "10.0.0.2", 00:34:08.310 "adrfam": "ipv4", 00:34:08.310 "trsvcid": "4420", 00:34:08.310 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:08.310 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:08.310 "hdgst": false, 00:34:08.310 "ddgst": false 00:34:08.310 }, 00:34:08.310 "method": "bdev_nvme_attach_controller" 00:34:08.310 }' 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:08.310 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:08.311 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.311 17:44:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:08.311 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:08.311 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:08.311 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:08.311 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:08.311 17:44:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.311 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:08.311 ... 00:34:08.311 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:08.311 ... 00:34:08.311 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:08.311 ... 
00:34:08.311 fio-3.35 00:34:08.311 Starting 24 threads 00:34:20.550 00:34:20.550 filename0: (groupid=0, jobs=1): err= 0: pid=2160195: Mon Dec 9 17:44:45 2024 00:34:20.550 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10005msec) 00:34:20.550 slat (nsec): min=7659, max=47726, avg=18723.08, stdev=7592.08 00:34:20.550 clat (usec): min=11790, max=39543, avg=30446.99, stdev=1346.52 00:34:20.550 lat (usec): min=11801, max=39571, avg=30465.71, stdev=1346.21 00:34:20.550 clat percentiles (usec): 00:34:20.550 | 1.00th=[29492], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:34:20.550 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.550 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:34:20.550 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:34:20.550 | 99.99th=[39584] 00:34:20.550 bw ( KiB/s): min= 2048, max= 2176, per=4.20%, avg=2086.40, stdev=60.18, samples=20 00:34:20.550 iops : min= 512, max= 544, avg=521.60, stdev=15.05, samples=20 00:34:20.550 lat (msec) : 20=0.65%, 50=99.35% 00:34:20.550 cpu : usr=98.51%, sys=1.10%, ctx=13, majf=0, minf=11 00:34:20.550 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.550 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.550 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.550 filename0: (groupid=0, jobs=1): err= 0: pid=2160196: Mon Dec 9 17:44:45 2024 00:34:20.550 read: IOPS=519, BW=2079KiB/s (2128kB/s)(20.3MiB/10007msec) 00:34:20.550 slat (nsec): min=8363, max=41216, avg=18808.37, stdev=5146.37 00:34:20.550 clat (usec): min=20704, max=51277, avg=30626.29, stdev=1209.16 00:34:20.550 lat (usec): min=20717, max=51294, avg=30645.10, stdev=1209.13 00:34:20.550 clat percentiles (usec): 00:34:20.550 | 1.00th=[30016], 
5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:34:20.550 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.550 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:34:20.550 | 99.00th=[31589], 99.50th=[34866], 99.90th=[51119], 99.95th=[51119], 00:34:20.550 | 99.99th=[51119] 00:34:20.550 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2073.60, stdev=66.96, samples=20 00:34:20.550 iops : min= 480, max= 544, avg=518.40, stdev=16.74, samples=20 00:34:20.550 lat (msec) : 50=99.69%, 100=0.31% 00:34:20.550 cpu : usr=98.47%, sys=1.14%, ctx=18, majf=0, minf=9 00:34:20.550 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.550 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.550 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.550 filename0: (groupid=0, jobs=1): err= 0: pid=2160197: Mon Dec 9 17:44:45 2024 00:34:20.550 read: IOPS=516, BW=2067KiB/s (2117kB/s)(20.3MiB/10062msec) 00:34:20.550 slat (nsec): min=6277, max=54499, avg=23033.36, stdev=7210.14 00:34:20.550 clat (usec): min=29859, max=97531, avg=30734.83, stdev=3648.57 00:34:20.550 lat (usec): min=29874, max=97547, avg=30757.86, stdev=3648.36 00:34:20.550 clat percentiles (usec): 00:34:20.550 | 1.00th=[30016], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:34:20.550 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.550 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:20.550 | 99.00th=[31327], 99.50th=[41681], 99.90th=[94897], 99.95th=[94897], 00:34:20.550 | 99.99th=[98042] 00:34:20.550 bw ( KiB/s): min= 1904, max= 2176, per=4.17%, avg=2072.80, stdev=80.50, samples=20 00:34:20.550 iops : min= 476, max= 544, avg=518.20, stdev=20.12, samples=20 00:34:20.550 lat 
(msec) : 50=99.69%, 100=0.31% 00:34:20.550 cpu : usr=98.42%, sys=1.18%, ctx=11, majf=0, minf=9 00:34:20.550 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.550 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.550 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.550 filename0: (groupid=0, jobs=1): err= 0: pid=2160198: Mon Dec 9 17:44:45 2024 00:34:20.550 read: IOPS=519, BW=2078KiB/s (2128kB/s)(20.4MiB/10072msec) 00:34:20.550 slat (nsec): min=7536, max=93018, avg=13316.43, stdev=11027.28 00:34:20.550 clat (usec): min=13997, max=93253, avg=30690.53, stdev=3663.86 00:34:20.550 lat (usec): min=14008, max=93267, avg=30703.85, stdev=3663.37 00:34:20.550 clat percentiles (usec): 00:34:20.550 | 1.00th=[29754], 5.00th=[30278], 10.00th=[30278], 20.00th=[30540], 00:34:20.550 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.550 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:34:20.550 | 99.00th=[31589], 99.50th=[31851], 99.90th=[92799], 99.95th=[92799], 00:34:20.550 | 99.99th=[92799] 00:34:20.550 bw ( KiB/s): min= 2048, max= 2176, per=4.20%, avg=2086.40, stdev=60.18, samples=20 00:34:20.550 iops : min= 512, max= 544, avg=521.60, stdev=15.05, samples=20 00:34:20.550 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:34:20.550 cpu : usr=98.46%, sys=1.13%, ctx=15, majf=0, minf=9 00:34:20.550 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.550 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.550 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.551 
filename0: (groupid=0, jobs=1): err= 0: pid=2160199: Mon Dec 9 17:44:45 2024 00:34:20.551 read: IOPS=517, BW=2070KiB/s (2120kB/s)(20.4MiB/10077msec) 00:34:20.551 slat (nsec): min=4961, max=51891, avg=23471.70, stdev=7341.68 00:34:20.551 clat (usec): min=17550, max=94900, avg=30700.21, stdev=3788.79 00:34:20.551 lat (usec): min=17560, max=94915, avg=30723.68, stdev=3788.69 00:34:20.551 clat percentiles (usec): 00:34:20.551 | 1.00th=[30016], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:34:20.551 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.551 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:20.551 | 99.00th=[36439], 99.50th=[42730], 99.90th=[94897], 99.95th=[94897], 00:34:20.551 | 99.99th=[94897] 00:34:20.551 bw ( KiB/s): min= 2032, max= 2176, per=4.19%, avg=2080.00, stdev=55.43, samples=20 00:34:20.551 iops : min= 508, max= 544, avg=520.00, stdev=13.86, samples=20 00:34:20.551 lat (msec) : 20=0.46%, 50=99.23%, 100=0.31% 00:34:20.551 cpu : usr=98.28%, sys=1.31%, ctx=13, majf=0, minf=9 00:34:20.551 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:34:20.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.551 filename0: (groupid=0, jobs=1): err= 0: pid=2160200: Mon Dec 9 17:44:45 2024 00:34:20.551 read: IOPS=516, BW=2065KiB/s (2115kB/s)(20.2MiB/10040msec) 00:34:20.551 slat (nsec): min=4548, max=79831, avg=35102.92, stdev=15295.89 00:34:20.551 clat (usec): min=22884, max=93807, avg=30663.13, stdev=3759.02 00:34:20.551 lat (usec): min=22912, max=93834, avg=30698.23, stdev=3758.75 00:34:20.551 clat percentiles (usec): 00:34:20.551 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:34:20.551 | 
30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:34:20.551 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:34:20.551 | 99.00th=[31589], 99.50th=[53740], 99.90th=[93848], 99.95th=[93848], 00:34:20.551 | 99.99th=[93848] 00:34:20.551 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2067.20, stdev=75.15, samples=20 00:34:20.551 iops : min= 480, max= 544, avg=516.80, stdev=18.79, samples=20 00:34:20.551 lat (msec) : 50=99.38%, 100=0.62% 00:34:20.551 cpu : usr=98.29%, sys=1.16%, ctx=150, majf=0, minf=9 00:34:20.551 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.551 filename0: (groupid=0, jobs=1): err= 0: pid=2160201: Mon Dec 9 17:44:45 2024 00:34:20.551 read: IOPS=516, BW=2066KiB/s (2115kB/s)(20.2MiB/10039msec) 00:34:20.551 slat (usec): min=5, max=100, avg=46.92, stdev=21.78 00:34:20.551 clat (usec): min=29520, max=93670, avg=30569.45, stdev=3716.83 00:34:20.551 lat (usec): min=29536, max=93718, avg=30616.37, stdev=3716.41 00:34:20.551 clat percentiles (usec): 00:34:20.551 | 1.00th=[29754], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:20.551 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:20.551 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:34:20.551 | 99.00th=[31589], 99.50th=[52691], 99.90th=[93848], 99.95th=[93848], 00:34:20.551 | 99.99th=[93848] 00:34:20.551 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2067.20, stdev=75.15, samples=20 00:34:20.551 iops : min= 480, max= 544, avg=516.80, stdev=18.79, samples=20 00:34:20.551 lat (msec) : 50=99.38%, 100=0.62% 00:34:20.551 cpu : usr=98.56%, sys=1.03%, ctx=16, 
majf=0, minf=9 00:34:20.551 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.551 filename0: (groupid=0, jobs=1): err= 0: pid=2160203: Mon Dec 9 17:44:45 2024 00:34:20.551 read: IOPS=519, BW=2078KiB/s (2127kB/s)(20.4MiB/10073msec) 00:34:20.551 slat (nsec): min=7762, max=91197, avg=31993.11, stdev=16352.03 00:34:20.551 clat (usec): min=14121, max=93056, avg=30549.99, stdev=3641.09 00:34:20.551 lat (usec): min=14136, max=93085, avg=30581.98, stdev=3641.57 00:34:20.551 clat percentiles (usec): 00:34:20.551 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:34:20.551 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:34:20.551 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:20.551 | 99.00th=[31327], 99.50th=[31589], 99.90th=[92799], 99.95th=[92799], 00:34:20.551 | 99.99th=[92799] 00:34:20.551 bw ( KiB/s): min= 2048, max= 2176, per=4.20%, avg=2086.40, stdev=60.18, samples=20 00:34:20.551 iops : min= 512, max= 544, avg=521.60, stdev=15.05, samples=20 00:34:20.551 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:34:20.551 cpu : usr=98.50%, sys=0.98%, ctx=81, majf=0, minf=9 00:34:20.551 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.551 filename1: (groupid=0, jobs=1): err= 0: pid=2160204: Mon Dec 9 17:44:45 2024 
00:34:20.551 read: IOPS=516, BW=2067KiB/s (2117kB/s)(20.3MiB/10061msec) 00:34:20.551 slat (nsec): min=4663, max=49635, avg=22464.28, stdev=7757.52 00:34:20.551 clat (usec): min=18545, max=97614, avg=30730.68, stdev=3660.49 00:34:20.551 lat (usec): min=18558, max=97637, avg=30753.15, stdev=3660.33 00:34:20.551 clat percentiles (usec): 00:34:20.551 | 1.00th=[30016], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:34:20.551 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.551 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:20.551 | 99.00th=[31327], 99.50th=[40633], 99.90th=[94897], 99.95th=[94897], 00:34:20.551 | 99.99th=[98042] 00:34:20.551 bw ( KiB/s): min= 1904, max= 2176, per=4.17%, avg=2072.95, stdev=80.20, samples=20 00:34:20.551 iops : min= 476, max= 544, avg=518.20, stdev=20.12, samples=20 00:34:20.551 lat (msec) : 20=0.04%, 50=99.65%, 100=0.31% 00:34:20.551 cpu : usr=98.70%, sys=0.88%, ctx=13, majf=0, minf=9 00:34:20.551 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.551 filename1: (groupid=0, jobs=1): err= 0: pid=2160205: Mon Dec 9 17:44:45 2024 00:34:20.551 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.4MiB/10002msec) 00:34:20.551 slat (usec): min=8, max=220, avg=21.80, stdev= 7.79 00:34:20.551 clat (usec): min=12922, max=34869, avg=30405.55, stdev=1341.58 00:34:20.551 lat (usec): min=12931, max=34905, avg=30427.34, stdev=1341.89 00:34:20.551 clat percentiles (usec): 00:34:20.551 | 1.00th=[26084], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:34:20.551 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.551 | 
70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:20.551 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:34:20.551 | 99.99th=[34866] 00:34:20.551 bw ( KiB/s): min= 2048, max= 2176, per=4.21%, avg=2088.42, stdev=61.13, samples=19 00:34:20.551 iops : min= 512, max= 544, avg=522.11, stdev=15.28, samples=19 00:34:20.551 lat (msec) : 20=0.65%, 50=99.35% 00:34:20.551 cpu : usr=98.77%, sys=0.83%, ctx=10, majf=0, minf=9 00:34:20.551 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.551 filename1: (groupid=0, jobs=1): err= 0: pid=2160206: Mon Dec 9 17:44:45 2024 00:34:20.551 read: IOPS=517, BW=2068KiB/s (2118kB/s)(20.3MiB/10057msec) 00:34:20.551 slat (nsec): min=6308, max=40068, avg=18477.55, stdev=5428.14 00:34:20.551 clat (usec): min=29753, max=64715, avg=30770.89, stdev=2446.93 00:34:20.551 lat (usec): min=29766, max=64734, avg=30789.36, stdev=2446.73 00:34:20.551 clat percentiles (usec): 00:34:20.551 | 1.00th=[30016], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:34:20.551 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.551 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:20.551 | 99.00th=[31589], 99.50th=[51119], 99.90th=[64750], 99.95th=[64750], 00:34:20.551 | 99.99th=[64750] 00:34:20.551 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2073.60, stdev=66.96, samples=20 00:34:20.551 iops : min= 480, max= 544, avg=518.40, stdev=16.74, samples=20 00:34:20.551 lat (msec) : 50=99.38%, 100=0.62% 00:34:20.551 cpu : usr=98.53%, sys=1.07%, ctx=12, majf=0, minf=9 00:34:20.551 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.551 filename1: (groupid=0, jobs=1): err= 0: pid=2160207: Mon Dec 9 17:44:45 2024 00:34:20.551 read: IOPS=517, BW=2070KiB/s (2120kB/s)(20.4MiB/10077msec) 00:34:20.551 slat (nsec): min=4610, max=50260, avg=22831.67, stdev=7223.28 00:34:20.551 clat (usec): min=22781, max=95050, avg=30696.79, stdev=3609.72 00:34:20.551 lat (usec): min=22789, max=95082, avg=30719.62, stdev=3609.66 00:34:20.551 clat percentiles (usec): 00:34:20.551 | 1.00th=[30016], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:34:20.551 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.551 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:20.551 | 99.00th=[31327], 99.50th=[36963], 99.90th=[94897], 99.95th=[94897], 00:34:20.551 | 99.99th=[94897] 00:34:20.551 bw ( KiB/s): min= 2048, max= 2176, per=4.19%, avg=2080.00, stdev=56.87, samples=20 00:34:20.551 iops : min= 512, max= 544, avg=520.00, stdev=14.22, samples=20 00:34:20.551 lat (msec) : 50=99.69%, 100=0.31% 00:34:20.551 cpu : usr=98.39%, sys=1.20%, ctx=12, majf=0, minf=9 00:34:20.551 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.551 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.552 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.552 filename1: (groupid=0, jobs=1): err= 0: pid=2160208: Mon Dec 9 17:44:45 2024 00:34:20.552 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.4MiB/10002msec) 00:34:20.552 slat 
(nsec): min=8390, max=47197, avg=21433.97, stdev=6919.91 00:34:20.552 clat (usec): min=12328, max=35395, avg=30412.49, stdev=1344.04 00:34:20.552 lat (usec): min=12341, max=35429, avg=30433.92, stdev=1344.32 00:34:20.552 clat percentiles (usec): 00:34:20.552 | 1.00th=[26084], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:34:20.552 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.552 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:20.552 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:34:20.552 | 99.99th=[35390] 00:34:20.552 bw ( KiB/s): min= 2048, max= 2176, per=4.21%, avg=2088.42, stdev=61.13, samples=19 00:34:20.552 iops : min= 512, max= 544, avg=522.11, stdev=15.28, samples=19 00:34:20.552 lat (msec) : 20=0.65%, 50=99.35% 00:34:20.552 cpu : usr=98.71%, sys=0.88%, ctx=14, majf=0, minf=11 00:34:20.552 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.552 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.552 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.552 filename1: (groupid=0, jobs=1): err= 0: pid=2160209: Mon Dec 9 17:44:45 2024 00:34:20.552 read: IOPS=516, BW=2067KiB/s (2116kB/s)(20.3MiB/10065msec) 00:34:20.552 slat (usec): min=7, max=100, avg=39.17, stdev=22.96 00:34:20.552 clat (msec): min=22, max=101, avg=30.67, stdev= 3.72 00:34:20.552 lat (msec): min=22, max=101, avg=30.70, stdev= 3.72 00:34:20.552 clat percentiles (msec): 00:34:20.552 | 1.00th=[ 25], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:34:20.552 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:34:20.552 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:34:20.552 | 99.00th=[ 37], 99.50th=[ 40], 99.90th=[ 93], 99.95th=[ 94], 
00:34:20.552 | 99.99th=[ 102] 00:34:20.552 bw ( KiB/s): min= 1936, max= 2160, per=4.18%, avg=2073.60, stdev=57.90, samples=20 00:34:20.552 iops : min= 484, max= 540, avg=518.40, stdev=14.47, samples=20 00:34:20.552 lat (msec) : 50=99.65%, 100=0.31%, 250=0.04% 00:34:20.552 cpu : usr=98.47%, sys=1.11%, ctx=10, majf=0, minf=9 00:34:20.552 IO depths : 1=1.9%, 2=8.1%, 4=25.0%, 8=54.4%, 16=10.6%, 32=0.0%, >=64=0.0% 00:34:20.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.552 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.552 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.552 filename1: (groupid=0, jobs=1): err= 0: pid=2160210: Mon Dec 9 17:44:45 2024 00:34:20.552 read: IOPS=522, BW=2089KiB/s (2139kB/s)(20.4MiB/10019msec) 00:34:20.552 slat (nsec): min=7388, max=40658, avg=14292.22, stdev=6143.66 00:34:20.552 clat (usec): min=7395, max=51011, avg=30517.76, stdev=1862.81 00:34:20.552 lat (usec): min=7404, max=51030, avg=30532.05, stdev=1862.67 00:34:20.552 clat percentiles (usec): 00:34:20.552 | 1.00th=[29492], 5.00th=[30278], 10.00th=[30278], 20.00th=[30540], 00:34:20.552 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.552 | 70.00th=[30802], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:34:20.552 | 99.00th=[31327], 99.50th=[31589], 99.90th=[51119], 99.95th=[51119], 00:34:20.552 | 99.99th=[51119] 00:34:20.552 bw ( KiB/s): min= 2048, max= 2176, per=4.20%, avg=2086.40, stdev=60.18, samples=20 00:34:20.552 iops : min= 512, max= 544, avg=521.60, stdev=15.05, samples=20 00:34:20.552 lat (msec) : 10=0.04%, 20=0.84%, 50=98.81%, 100=0.31% 00:34:20.552 cpu : usr=98.48%, sys=1.13%, ctx=12, majf=0, minf=9 00:34:20.552 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:20.552 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.552 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.552 filename1: (groupid=0, jobs=1): err= 0: pid=2160212: Mon Dec 9 17:44:45 2024 00:34:20.552 read: IOPS=516, BW=2065KiB/s (2115kB/s)(20.2MiB/10040msec) 00:34:20.552 slat (usec): min=5, max=100, avg=44.08, stdev=23.69 00:34:20.552 clat (usec): min=29614, max=93764, avg=30537.06, stdev=3735.02 00:34:20.552 lat (usec): min=29634, max=93830, avg=30581.14, stdev=3735.29 00:34:20.552 clat percentiles (usec): 00:34:20.552 | 1.00th=[29754], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:20.552 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:34:20.552 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:34:20.552 | 99.00th=[31327], 99.50th=[53216], 99.90th=[93848], 99.95th=[93848], 00:34:20.552 | 99.99th=[93848] 00:34:20.552 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2067.20, stdev=75.15, samples=20 00:34:20.552 iops : min= 480, max= 544, avg=516.80, stdev=18.79, samples=20 00:34:20.552 lat (msec) : 50=99.38%, 100=0.62% 00:34:20.552 cpu : usr=98.62%, sys=0.99%, ctx=12, majf=0, minf=9 00:34:20.552 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.552 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.552 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.552 filename2: (groupid=0, jobs=1): err= 0: pid=2160213: Mon Dec 9 17:44:45 2024 00:34:20.552 read: IOPS=519, BW=2079KiB/s (2128kB/s)(20.3MiB/10007msec) 00:34:20.552 slat (nsec): min=8200, max=41790, avg=19080.90, stdev=5757.95 00:34:20.552 clat (usec): min=19951, max=51383, avg=30628.61, 
stdev=1221.53 00:34:20.552 lat (usec): min=19959, max=51402, avg=30647.69, stdev=1221.42 00:34:20.552 clat percentiles (usec): 00:34:20.552 | 1.00th=[30016], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:34:20.552 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.552 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:20.552 | 99.00th=[31327], 99.50th=[34341], 99.90th=[51119], 99.95th=[51119], 00:34:20.552 | 99.99th=[51643] 00:34:20.552 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2073.60, stdev=66.96, samples=20 00:34:20.552 iops : min= 480, max= 544, avg=518.40, stdev=16.74, samples=20 00:34:20.552 lat (msec) : 20=0.04%, 50=99.65%, 100=0.31% 00:34:20.552 cpu : usr=98.66%, sys=0.94%, ctx=11, majf=0, minf=9 00:34:20.552 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.552 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.552 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.552 filename2: (groupid=0, jobs=1): err= 0: pid=2160214: Mon Dec 9 17:44:45 2024 00:34:20.552 read: IOPS=519, BW=2078KiB/s (2127kB/s)(20.4MiB/10073msec) 00:34:20.552 slat (nsec): min=7493, max=96755, avg=25670.42, stdev=20183.60 00:34:20.552 clat (usec): min=14057, max=93280, avg=30605.15, stdev=3663.31 00:34:20.552 lat (usec): min=14071, max=93295, avg=30630.82, stdev=3663.26 00:34:20.552 clat percentiles (usec): 00:34:20.552 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:34:20.552 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.552 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[30802], 00:34:20.552 | 99.00th=[31589], 99.50th=[31851], 99.90th=[92799], 99.95th=[92799], 00:34:20.552 | 99.99th=[92799] 00:34:20.552 bw ( KiB/s): 
min= 2048, max= 2176, per=4.20%, avg=2086.40, stdev=60.18, samples=20 00:34:20.552 iops : min= 512, max= 544, avg=521.60, stdev=15.05, samples=20 00:34:20.552 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:34:20.552 cpu : usr=98.25%, sys=1.35%, ctx=13, majf=0, minf=9 00:34:20.552 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.552 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.552 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.552 filename2: (groupid=0, jobs=1): err= 0: pid=2160215: Mon Dec 9 17:44:45 2024 00:34:20.552 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10048msec) 00:34:20.552 slat (nsec): min=5644, max=86576, avg=16338.18, stdev=10157.80 00:34:20.552 clat (usec): min=13451, max=95428, avg=30319.65, stdev=4305.43 00:34:20.552 lat (usec): min=13464, max=95447, avg=30335.99, stdev=4305.44 00:34:20.552 clat percentiles (usec): 00:34:20.552 | 1.00th=[18482], 5.00th=[24511], 10.00th=[27657], 20.00th=[30540], 00:34:20.552 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.552 | 70.00th=[30802], 80.00th=[30802], 90.00th=[30802], 95.00th=[31327], 00:34:20.552 | 99.00th=[42730], 99.50th=[53216], 99.90th=[94897], 99.95th=[94897], 00:34:20.552 | 99.99th=[95945] 00:34:20.552 bw ( KiB/s): min= 1920, max= 2256, per=4.25%, avg=2107.20, stdev=69.03, samples=20 00:34:20.552 iops : min= 480, max= 564, avg=526.80, stdev=17.26, samples=20 00:34:20.552 lat (msec) : 20=1.97%, 50=97.35%, 100=0.68% 00:34:20.552 cpu : usr=98.37%, sys=1.22%, ctx=13, majf=0, minf=9 00:34:20.552 IO depths : 1=0.1%, 2=0.4%, 4=2.2%, 8=79.7%, 16=17.6%, 32=0.0%, >=64=0.0% 00:34:20.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.552 complete : 0=0.0%, 4=89.6%, 8=9.5%, 16=0.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:20.552 issued rwts: total=5284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.552 filename2: (groupid=0, jobs=1): err= 0: pid=2160216: Mon Dec 9 17:44:45 2024 00:34:20.552 read: IOPS=516, BW=2066KiB/s (2115kB/s)(20.2MiB/10039msec) 00:34:20.552 slat (usec): min=4, max=100, avg=44.03, stdev=23.69 00:34:20.552 clat (usec): min=29601, max=93788, avg=30531.68, stdev=3726.68 00:34:20.552 lat (usec): min=29618, max=93842, avg=30575.70, stdev=3727.03 00:34:20.552 clat percentiles (usec): 00:34:20.552 | 1.00th=[29754], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:20.552 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:34:20.552 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:34:20.552 | 99.00th=[31327], 99.50th=[52691], 99.90th=[93848], 99.95th=[93848], 00:34:20.552 | 99.99th=[93848] 00:34:20.552 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2067.20, stdev=75.15, samples=20 00:34:20.552 iops : min= 480, max= 544, avg=516.80, stdev=18.79, samples=20 00:34:20.552 lat (msec) : 50=99.38%, 100=0.62% 00:34:20.552 cpu : usr=98.68%, sys=0.93%, ctx=15, majf=0, minf=9 00:34:20.553 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.553 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.553 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.553 filename2: (groupid=0, jobs=1): err= 0: pid=2160217: Mon Dec 9 17:44:45 2024 00:34:20.553 read: IOPS=520, BW=2082KiB/s (2132kB/s)(20.5MiB/10084msec) 00:34:20.553 slat (nsec): min=7684, max=53949, avg=21902.67, stdev=7128.20 00:34:20.553 clat (usec): min=12398, max=97809, avg=30553.74, stdev=3921.98 00:34:20.553 lat (usec): min=12414, max=97826, 
avg=30575.64, stdev=3921.74 00:34:20.553 clat percentiles (usec): 00:34:20.553 | 1.00th=[18482], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:34:20.553 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.553 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:20.553 | 99.00th=[31327], 99.50th=[31589], 99.90th=[94897], 99.95th=[94897], 00:34:20.553 | 99.99th=[98042] 00:34:20.553 bw ( KiB/s): min= 2048, max= 2176, per=4.22%, avg=2092.80, stdev=62.64, samples=20 00:34:20.553 iops : min= 512, max= 544, avg=523.20, stdev=15.66, samples=20 00:34:20.553 lat (msec) : 20=1.22%, 50=98.48%, 100=0.30% 00:34:20.553 cpu : usr=98.72%, sys=0.89%, ctx=16, majf=0, minf=11 00:34:20.553 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.553 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.553 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.553 filename2: (groupid=0, jobs=1): err= 0: pid=2160218: Mon Dec 9 17:44:45 2024 00:34:20.553 read: IOPS=516, BW=2068KiB/s (2117kB/s)(20.3MiB/10059msec) 00:34:20.553 slat (nsec): min=7665, max=44194, avg=19163.99, stdev=5160.24 00:34:20.553 clat (usec): min=29713, max=66477, avg=30768.52, stdev=2462.05 00:34:20.553 lat (usec): min=29730, max=66492, avg=30787.69, stdev=2461.86 00:34:20.553 clat percentiles (usec): 00:34:20.553 | 1.00th=[30016], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:34:20.553 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:20.553 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:20.553 | 99.00th=[31589], 99.50th=[51119], 99.90th=[64750], 99.95th=[64750], 00:34:20.553 | 99.99th=[66323] 00:34:20.553 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2073.60, stdev=66.96, 
samples=20 00:34:20.553 iops : min= 480, max= 544, avg=518.40, stdev=16.74, samples=20 00:34:20.553 lat (msec) : 50=99.38%, 100=0.62% 00:34:20.553 cpu : usr=98.76%, sys=0.86%, ctx=12, majf=0, minf=9 00:34:20.553 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.553 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.553 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.553 filename2: (groupid=0, jobs=1): err= 0: pid=2160219: Mon Dec 9 17:44:45 2024 00:34:20.553 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.4MiB/10067msec) 00:34:20.553 slat (usec): min=6, max=100, avg=44.02, stdev=22.28 00:34:20.553 clat (usec): min=22589, max=93377, avg=30545.34, stdev=3628.51 00:34:20.553 lat (usec): min=22615, max=93419, avg=30589.36, stdev=3628.39 00:34:20.553 clat percentiles (usec): 00:34:20.553 | 1.00th=[26608], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:20.553 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:34:20.553 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:34:20.553 | 99.00th=[31851], 99.50th=[35390], 99.90th=[92799], 99.95th=[92799], 00:34:20.553 | 99.99th=[93848] 00:34:20.553 bw ( KiB/s): min= 2020, max= 2176, per=4.19%, avg=2078.60, stdev=58.03, samples=20 00:34:20.553 iops : min= 505, max= 544, avg=519.65, stdev=14.51, samples=20 00:34:20.553 lat (msec) : 50=99.69%, 100=0.31% 00:34:20.553 cpu : usr=98.74%, sys=0.85%, ctx=19, majf=0, minf=9 00:34:20.553 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:20.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.553 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.553 issued rwts: total=5212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:34:20.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.553 filename2: (groupid=0, jobs=1): err= 0: pid=2160220: Mon Dec 9 17:44:45 2024 00:34:20.553 read: IOPS=516, BW=2066KiB/s (2115kB/s)(20.2MiB/10039msec) 00:34:20.553 slat (usec): min=8, max=100, avg=46.23, stdev=22.55 00:34:20.553 clat (usec): min=29546, max=93605, avg=30539.82, stdev=3709.67 00:34:20.553 lat (usec): min=29560, max=93634, avg=30586.05, stdev=3710.07 00:34:20.553 clat percentiles (usec): 00:34:20.553 | 1.00th=[29754], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:20.553 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:20.553 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:34:20.553 | 99.00th=[31327], 99.50th=[52167], 99.90th=[92799], 99.95th=[93848], 00:34:20.553 | 99.99th=[93848] 00:34:20.553 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2067.35, stdev=74.85, samples=20 00:34:20.553 iops : min= 480, max= 544, avg=516.80, stdev=18.79, samples=20 00:34:20.553 lat (msec) : 50=99.38%, 100=0.62% 00:34:20.553 cpu : usr=98.79%, sys=0.81%, ctx=13, majf=0, minf=9 00:34:20.553 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.553 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.553 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.553 00:34:20.553 Run status group 0 (all jobs): 00:34:20.553 READ: bw=48.5MiB/s (50.8MB/s), 2065KiB/s-2104KiB/s (2115kB/s-2154kB/s), io=489MiB (512MB), run=10002-10084msec 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 
00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.553 17:44:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- 
# local sub 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.553 bdev_null0 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.553 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.554 17:44:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.554 [2024-12-09 17:44:45.683111] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.554 bdev_null1 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.554 
17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:20.554 { 00:34:20.554 "params": { 00:34:20.554 "name": "Nvme$subsystem", 00:34:20.554 "trtype": "$TEST_TRANSPORT", 00:34:20.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:20.554 "adrfam": "ipv4", 00:34:20.554 "trsvcid": "$NVMF_PORT", 00:34:20.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:34:20.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:20.554 "hdgst": ${hdgst:-false}, 00:34:20.554 "ddgst": ${ddgst:-false} 00:34:20.554 }, 00:34:20.554 "method": "bdev_nvme_attach_controller" 00:34:20.554 } 00:34:20.554 EOF 00:34:20.554 )") 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:20.554 { 00:34:20.554 "params": { 00:34:20.554 "name": "Nvme$subsystem", 00:34:20.554 "trtype": "$TEST_TRANSPORT", 00:34:20.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:20.554 "adrfam": "ipv4", 00:34:20.554 "trsvcid": "$NVMF_PORT", 00:34:20.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:20.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:20.554 "hdgst": ${hdgst:-false}, 00:34:20.554 "ddgst": ${ddgst:-false} 00:34:20.554 }, 00:34:20.554 "method": "bdev_nvme_attach_controller" 00:34:20.554 } 00:34:20.554 EOF 00:34:20.554 )") 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:20.554 "params": { 00:34:20.554 "name": "Nvme0", 00:34:20.554 "trtype": "tcp", 00:34:20.554 "traddr": "10.0.0.2", 00:34:20.554 "adrfam": "ipv4", 00:34:20.554 "trsvcid": "4420", 00:34:20.554 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:20.554 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:20.554 "hdgst": false, 00:34:20.554 "ddgst": false 00:34:20.554 }, 00:34:20.554 "method": "bdev_nvme_attach_controller" 00:34:20.554 },{ 00:34:20.554 "params": { 00:34:20.554 "name": "Nvme1", 00:34:20.554 "trtype": "tcp", 00:34:20.554 "traddr": "10.0.0.2", 00:34:20.554 "adrfam": "ipv4", 00:34:20.554 "trsvcid": "4420", 00:34:20.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:20.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:20.554 "hdgst": false, 00:34:20.554 "ddgst": false 00:34:20.554 }, 00:34:20.554 "method": "bdev_nvme_attach_controller" 00:34:20.554 }' 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:20.554 17:44:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:20.554 17:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:20.554 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:20.554 ... 00:34:20.554 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:20.554 ... 00:34:20.554 fio-3.35 00:34:20.554 Starting 4 threads 00:34:25.821 00:34:25.821 filename0: (groupid=0, jobs=1): err= 0: pid=2162148: Mon Dec 9 17:44:51 2024 00:34:25.821 read: IOPS=2703, BW=21.1MiB/s (22.1MB/s)(106MiB/5042msec) 00:34:25.821 slat (nsec): min=5983, max=65991, avg=10794.63, stdev=7532.61 00:34:25.821 clat (usec): min=752, max=41699, avg=2905.04, stdev=533.69 00:34:25.821 lat (usec): min=779, max=41724, avg=2915.83, stdev=534.03 00:34:25.821 clat percentiles (usec): 00:34:25.821 | 1.00th=[ 1713], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2606], 00:34:25.821 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 2999], 00:34:25.821 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3294], 95.00th=[ 3523], 00:34:25.821 | 99.00th=[ 4113], 99.50th=[ 4359], 99.90th=[ 4817], 99.95th=[ 5014], 00:34:25.821 | 99.99th=[ 5604] 00:34:25.821 bw ( KiB/s): min=20816, max=23184, per=26.11%, avg=21801.80, stdev=802.62, samples=10 00:34:25.821 iops : min= 2602, max= 2898, avg=2725.20, stdev=100.30, samples=10 00:34:25.821 lat (usec) : 1000=0.07% 00:34:25.821 lat (msec) : 2=2.25%, 4=96.41%, 10=1.26%, 50=0.01% 00:34:25.821 cpu : usr=96.11%, sys=3.59%, ctx=10, majf=0, minf=9 00:34:25.821 IO depths : 1=0.3%, 2=6.1%, 4=65.8%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.821 
complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.821 issued rwts: total=13630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.821 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:25.821 filename0: (groupid=0, jobs=1): err= 0: pid=2162149: Mon Dec 9 17:44:51 2024 00:34:25.821 read: IOPS=2629, BW=20.5MiB/s (21.5MB/s)(103MiB/5001msec) 00:34:25.821 slat (nsec): min=6061, max=66591, avg=12317.28, stdev=7911.49 00:34:25.821 clat (usec): min=617, max=5559, avg=3004.88, stdev=429.98 00:34:25.821 lat (usec): min=631, max=5566, avg=3017.19, stdev=429.99 00:34:25.821 clat percentiles (usec): 00:34:25.821 | 1.00th=[ 1958], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2737], 00:34:25.821 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:34:25.821 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3720], 00:34:25.821 | 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 5080], 99.95th=[ 5211], 00:34:25.821 | 99.99th=[ 5342] 00:34:25.821 bw ( KiB/s): min=20352, max=21712, per=25.23%, avg=21068.44, stdev=431.39, samples=9 00:34:25.821 iops : min= 2544, max= 2714, avg=2633.56, stdev=53.92, samples=9 00:34:25.821 lat (usec) : 750=0.03%, 1000=0.04% 00:34:25.821 lat (msec) : 2=1.09%, 4=95.70%, 10=3.13% 00:34:25.821 cpu : usr=95.24%, sys=3.96%, ctx=111, majf=0, minf=9 00:34:25.821 IO depths : 1=0.2%, 2=5.9%, 4=65.6%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.821 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.821 issued rwts: total=13151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.821 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:25.821 filename1: (groupid=0, jobs=1): err= 0: pid=2162150: Mon Dec 9 17:44:51 2024 00:34:25.821 read: IOPS=2639, BW=20.6MiB/s (21.6MB/s)(103MiB/5003msec) 00:34:25.821 slat (nsec): min=5978, max=68904, avg=11539.71, stdev=8297.90 00:34:25.821 clat 
(usec): min=648, max=43010, avg=2995.67, stdev=1074.61 00:34:25.821 lat (usec): min=654, max=43029, avg=3007.21, stdev=1074.64 00:34:25.821 clat percentiles (usec): 00:34:25.821 | 1.00th=[ 1893], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2704], 00:34:25.821 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:34:25.821 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3425], 95.00th=[ 3654], 00:34:25.821 | 99.00th=[ 4424], 99.50th=[ 4752], 99.90th=[ 5276], 99.95th=[42730], 00:34:25.821 | 99.99th=[43254] 00:34:25.821 bw ( KiB/s): min=19568, max=22080, per=25.26%, avg=21089.78, stdev=799.38, samples=9 00:34:25.821 iops : min= 2446, max= 2760, avg=2636.22, stdev=99.92, samples=9 00:34:25.821 lat (usec) : 750=0.02%, 1000=0.01% 00:34:25.821 lat (msec) : 2=1.51%, 4=95.89%, 10=2.51%, 50=0.06% 00:34:25.821 cpu : usr=96.50%, sys=3.18%, ctx=5, majf=0, minf=9 00:34:25.821 IO depths : 1=0.2%, 2=5.8%, 4=65.7%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.821 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.821 issued rwts: total=13203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.821 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:25.821 filename1: (groupid=0, jobs=1): err= 0: pid=2162151: Mon Dec 9 17:44:51 2024 00:34:25.821 read: IOPS=2526, BW=19.7MiB/s (20.7MB/s)(98.7MiB/5001msec) 00:34:25.821 slat (nsec): min=5982, max=68932, avg=11390.59, stdev=8227.54 00:34:25.821 clat (usec): min=610, max=6231, avg=3132.19, stdev=467.26 00:34:25.821 lat (usec): min=623, max=6245, avg=3143.58, stdev=466.63 00:34:25.821 clat percentiles (usec): 00:34:25.821 | 1.00th=[ 2024], 5.00th=[ 2540], 10.00th=[ 2769], 20.00th=[ 2900], 00:34:25.821 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3097], 00:34:25.821 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3621], 95.00th=[ 4047], 00:34:25.821 | 99.00th=[ 4883], 99.50th=[ 5080], 
99.90th=[ 5538], 99.95th=[ 5669], 00:34:25.821 | 99.99th=[ 6194] 00:34:25.821 bw ( KiB/s): min=19312, max=20777, per=24.18%, avg=20187.67, stdev=522.74, samples=9 00:34:25.821 iops : min= 2414, max= 2597, avg=2523.44, stdev=65.32, samples=9 00:34:25.821 lat (usec) : 750=0.02%, 1000=0.10% 00:34:25.821 lat (msec) : 2=0.78%, 4=93.73%, 10=5.37% 00:34:25.821 cpu : usr=96.22%, sys=3.48%, ctx=8, majf=0, minf=9 00:34:25.821 IO depths : 1=0.1%, 2=3.8%, 4=67.5%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.821 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.821 issued rwts: total=12635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.821 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:25.821 00:34:25.821 Run status group 0 (all jobs): 00:34:25.821 READ: bw=81.5MiB/s (85.5MB/s), 19.7MiB/s-21.1MiB/s (20.7MB/s-22.1MB/s), io=411MiB (431MB), run=5001-5042msec 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.821 17:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:25.822 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.822 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.822 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.822 00:34:25.822 real 0m24.247s 00:34:25.822 user 4m52.681s 00:34:25.822 sys 0m4.939s 00:34:25.822 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:25.822 17:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.822 ************************************ 00:34:25.822 END TEST fio_dif_rand_params 00:34:25.822 ************************************ 00:34:25.822 17:44:52 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:25.822 17:44:52 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:25.822 17:44:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.822 17:44:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:25.822 ************************************ 00:34:25.822 START TEST fio_dif_digest 00:34:25.822 ************************************ 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:25.822 bdev_null0 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:25.822 [2024-12-09 17:44:52.151775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:25.822 { 00:34:25.822 "params": { 00:34:25.822 "name": "Nvme$subsystem", 00:34:25.822 "trtype": "$TEST_TRANSPORT", 00:34:25.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:25.822 "adrfam": "ipv4", 00:34:25.822 "trsvcid": "$NVMF_PORT", 00:34:25.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:25.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:25.822 "hdgst": ${hdgst:-false}, 00:34:25.822 "ddgst": ${ddgst:-false} 00:34:25.822 }, 00:34:25.822 "method": "bdev_nvme_attach_controller" 00:34:25.822 } 00:34:25.822 EOF 00:34:25.822 )") 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:25.822 "params": { 00:34:25.822 "name": "Nvme0", 00:34:25.822 "trtype": "tcp", 00:34:25.822 "traddr": "10.0.0.2", 00:34:25.822 "adrfam": "ipv4", 00:34:25.822 "trsvcid": "4420", 00:34:25.822 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:25.822 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:25.822 "hdgst": true, 00:34:25.822 "ddgst": true 00:34:25.822 }, 00:34:25.822 "method": "bdev_nvme_attach_controller" 00:34:25.822 }' 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:25.822 17:44:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:26.080 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:26.080 ... 
00:34:26.080 fio-3.35 00:34:26.080 Starting 3 threads 00:34:38.275 00:34:38.275 filename0: (groupid=0, jobs=1): err= 0: pid=2163186: Mon Dec 9 17:45:03 2024 00:34:38.275 read: IOPS=293, BW=36.7MiB/s (38.4MB/s)(368MiB/10045msec) 00:34:38.275 slat (nsec): min=6305, max=60913, avg=18315.60, stdev=5113.62 00:34:38.275 clat (usec): min=5697, max=52397, avg=10193.88, stdev=1272.53 00:34:38.275 lat (usec): min=5708, max=52418, avg=10212.19, stdev=1272.43 00:34:38.275 clat percentiles (usec): 00:34:38.275 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:34:38.275 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:34:38.275 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:34:38.275 | 99.00th=[11863], 99.50th=[12125], 99.90th=[13042], 99.95th=[47973], 00:34:38.275 | 99.99th=[52167] 00:34:38.275 bw ( KiB/s): min=36608, max=38656, per=35.61%, avg=37679.30, stdev=572.35, samples=20 00:34:38.275 iops : min= 286, max= 302, avg=294.35, stdev= 4.44, samples=20 00:34:38.275 lat (msec) : 10=39.51%, 20=60.42%, 50=0.03%, 100=0.03% 00:34:38.275 cpu : usr=95.66%, sys=4.00%, ctx=15, majf=0, minf=0 00:34:38.275 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.275 issued rwts: total=2946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.275 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:38.275 filename0: (groupid=0, jobs=1): err= 0: pid=2163187: Mon Dec 9 17:45:03 2024 00:34:38.275 read: IOPS=264, BW=33.0MiB/s (34.6MB/s)(332MiB/10044msec) 00:34:38.275 slat (nsec): min=6468, max=51165, avg=18724.67, stdev=8495.78 00:34:38.275 clat (usec): min=7364, max=54139, avg=11317.63, stdev=1864.43 00:34:38.275 lat (usec): min=7392, max=54179, avg=11336.36, stdev=1864.33 00:34:38.275 clat percentiles (usec): 00:34:38.275 | 
1.00th=[ 9503], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:34:38.275 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:34:38.275 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:34:38.275 | 99.00th=[13173], 99.50th=[13435], 99.90th=[52691], 99.95th=[52691], 00:34:38.275 | 99.99th=[54264] 00:34:38.275 bw ( KiB/s): min=29952, max=34816, per=32.08%, avg=33945.60, stdev=1100.00, samples=20 00:34:38.275 iops : min= 234, max= 272, avg=265.20, stdev= 8.59, samples=20 00:34:38.275 lat (msec) : 10=4.11%, 20=95.70%, 50=0.08%, 100=0.11% 00:34:38.275 cpu : usr=96.41%, sys=3.29%, ctx=16, majf=0, minf=9 00:34:38.275 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.275 issued rwts: total=2654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.275 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:38.275 filename0: (groupid=0, jobs=1): err= 0: pid=2163188: Mon Dec 9 17:45:03 2024 00:34:38.275 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(338MiB/10044msec) 00:34:38.275 slat (nsec): min=6511, max=46066, avg=18840.31, stdev=8338.84 00:34:38.275 clat (usec): min=6542, max=47222, avg=11107.25, stdev=1246.67 00:34:38.275 lat (usec): min=6558, max=47237, avg=11126.09, stdev=1246.80 00:34:38.275 clat percentiles (usec): 00:34:38.275 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10421], 00:34:38.275 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:34:38.275 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:34:38.275 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14484], 99.95th=[45351], 00:34:38.275 | 99.99th=[47449] 00:34:38.275 bw ( KiB/s): min=33280, max=36096, per=32.68%, avg=34585.60, stdev=714.00, samples=20 00:34:38.275 iops : min= 260, max= 282, avg=270.20, stdev= 
5.58, samples=20 00:34:38.275 lat (msec) : 10=7.36%, 20=92.57%, 50=0.07% 00:34:38.275 cpu : usr=96.11%, sys=3.58%, ctx=18, majf=0, minf=2 00:34:38.275 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.275 issued rwts: total=2704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.275 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:38.275 00:34:38.275 Run status group 0 (all jobs): 00:34:38.275 READ: bw=103MiB/s (108MB/s), 33.0MiB/s-36.7MiB/s (34.6MB/s-38.4MB/s), io=1038MiB (1088MB), run=10044-10045msec 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.275 00:34:38.275 real 
0m11.127s 00:34:38.275 user 0m36.214s 00:34:38.275 sys 0m1.392s 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:38.275 17:45:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:38.275 ************************************ 00:34:38.275 END TEST fio_dif_digest 00:34:38.275 ************************************ 00:34:38.275 17:45:03 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:38.275 17:45:03 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:38.275 17:45:03 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:38.275 17:45:03 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:38.275 17:45:03 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:38.275 17:45:03 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:38.275 17:45:03 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:38.275 17:45:03 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:38.275 rmmod nvme_tcp 00:34:38.275 rmmod nvme_fabrics 00:34:38.275 rmmod nvme_keyring 00:34:38.275 17:45:03 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:38.275 17:45:03 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:38.275 17:45:03 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:38.275 17:45:03 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2154795 ']' 00:34:38.275 17:45:03 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2154795 00:34:38.275 17:45:03 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2154795 ']' 00:34:38.275 17:45:03 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2154795 00:34:38.275 17:45:03 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:38.275 17:45:03 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:38.275 17:45:03 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2154795 00:34:38.275 17:45:03 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:38.275 17:45:03 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:38.275 17:45:03 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2154795' 00:34:38.275 killing process with pid 2154795 00:34:38.275 17:45:03 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2154795 00:34:38.275 17:45:03 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2154795 00:34:38.275 17:45:03 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:38.275 17:45:03 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:39.656 Waiting for block devices as requested 00:34:39.916 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:39.916 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:39.916 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:40.175 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:40.175 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:40.175 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:40.434 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:40.434 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:40.434 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:40.693 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:40.693 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:40.693 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:40.693 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:40.953 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:40.953 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:40.953 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:41.212 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:41.212 17:45:07 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:41.212 17:45:07 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:41.212 17:45:07 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:41.212 17:45:07 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:41.212 17:45:07 nvmf_dif -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:34:41.212 17:45:07 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:41.212 17:45:07 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:41.212 17:45:07 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:41.212 17:45:07 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.212 17:45:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:41.212 17:45:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.747 17:45:09 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:43.747 00:34:43.747 real 1m13.915s 00:34:43.747 user 7m10.669s 00:34:43.747 sys 0m20.143s 00:34:43.747 17:45:09 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:43.747 17:45:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:43.747 ************************************ 00:34:43.747 END TEST nvmf_dif 00:34:43.747 ************************************ 00:34:43.747 17:45:09 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:43.747 17:45:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:43.747 17:45:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:43.747 17:45:09 -- common/autotest_common.sh@10 -- # set +x 00:34:43.747 ************************************ 00:34:43.747 START TEST nvmf_abort_qd_sizes 00:34:43.747 ************************************ 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:43.747 * Looking for test storage... 
00:34:43.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:43.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.747 --rc genhtml_branch_coverage=1 00:34:43.747 --rc genhtml_function_coverage=1 00:34:43.747 --rc genhtml_legend=1 00:34:43.747 --rc geninfo_all_blocks=1 00:34:43.747 --rc geninfo_unexecuted_blocks=1 00:34:43.747 00:34:43.747 ' 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:43.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.747 --rc genhtml_branch_coverage=1 00:34:43.747 --rc genhtml_function_coverage=1 00:34:43.747 --rc genhtml_legend=1 00:34:43.747 --rc 
geninfo_all_blocks=1 00:34:43.747 --rc geninfo_unexecuted_blocks=1 00:34:43.747 00:34:43.747 ' 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:43.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.747 --rc genhtml_branch_coverage=1 00:34:43.747 --rc genhtml_function_coverage=1 00:34:43.747 --rc genhtml_legend=1 00:34:43.747 --rc geninfo_all_blocks=1 00:34:43.747 --rc geninfo_unexecuted_blocks=1 00:34:43.747 00:34:43.747 ' 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:43.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.747 --rc genhtml_branch_coverage=1 00:34:43.747 --rc genhtml_function_coverage=1 00:34:43.747 --rc genhtml_legend=1 00:34:43.747 --rc geninfo_all_blocks=1 00:34:43.747 --rc geninfo_unexecuted_blocks=1 00:34:43.747 00:34:43.747 ' 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:43.747 17:45:09 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.747 17:45:09 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:43.748 17:45:09 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:43.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:43.748 17:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:49.023 17:45:15 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:49.023 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:49.023 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:49.023 Found net devices under 0000:af:00.0: cvl_0_0 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:49.023 Found net devices under 0000:af:00.1: cvl_0_1 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:49.023 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:49.283 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:49.283 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:49.283 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:49.283 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:49.283 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:49.283 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:49.283 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:49.283 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:49.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:49.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:34:49.283 00:34:49.283 --- 10.0.0.2 ping statistics --- 00:34:49.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:49.283 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:34:49.283 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:49.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:49.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:34:49.283 00:34:49.283 --- 10.0.0.1 ping statistics --- 00:34:49.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:49.283 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:34:49.283 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:49.283 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:49.283 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:49.283 17:45:15 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:52.574 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:52.574 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:53.142 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:53.142 17:45:19 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.142 17:45:19 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:53.142 17:45:19 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:53.142 17:45:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.142 17:45:19 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:53.142 17:45:19 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:53.399 17:45:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:53.399 17:45:19 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:53.399 17:45:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:53.399 17:45:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:53.400 17:45:19 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2171558 00:34:53.400 17:45:19 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:53.400 17:45:19 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2171558 00:34:53.400 17:45:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2171558 ']' 00:34:53.400 17:45:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.400 17:45:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.400 17:45:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.400 17:45:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.400 17:45:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:53.400 [2024-12-09 17:45:19.748985] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:34:53.400 [2024-12-09 17:45:19.749027] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.400 [2024-12-09 17:45:19.823589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:53.400 [2024-12-09 17:45:19.865208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:53.400 [2024-12-09 17:45:19.865245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:53.400 [2024-12-09 17:45:19.865252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.400 [2024-12-09 17:45:19.865258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:53.400 [2024-12-09 17:45:19.865263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:53.400 [2024-12-09 17:45:19.866596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.400 [2024-12-09 17:45:19.866709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:53.400 [2024-12-09 17:45:19.866814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.400 [2024-12-09 17:45:19.866815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:53.658 17:45:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:53.658 17:45:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:53.658 17:45:19 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:53.658 17:45:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:53.658 17:45:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:53.658 17:45:19 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.658 17:45:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:53.658 17:45:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:53.658 17:45:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:53.658 17:45:19 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:53.658 17:45:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:53.658 ************************************ 00:34:53.658 START TEST spdk_target_abort 00:34:53.658 ************************************ 00:34:53.658 17:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:53.658 17:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:53.658 17:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:53.658 17:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.658 17:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:56.938 spdk_targetn1 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:56.938 [2024-12-09 17:45:22.882552] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:56.938 [2024-12-09 17:45:22.938857] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:56.938 17:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:00.217 Initializing NVMe Controllers 00:35:00.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:00.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:00.217 Initialization complete. Launching workers. 
00:35:00.217 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14485, failed: 0 00:35:00.217 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1365, failed to submit 13120 00:35:00.217 success 671, unsuccessful 694, failed 0 00:35:00.217 17:45:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:00.217 17:45:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:03.497 Initializing NVMe Controllers 00:35:03.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:03.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:03.497 Initialization complete. Launching workers. 00:35:03.497 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8674, failed: 0 00:35:03.497 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1229, failed to submit 7445 00:35:03.497 success 327, unsuccessful 902, failed 0 00:35:03.497 17:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:03.497 17:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:06.774 Initializing NVMe Controllers 00:35:06.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:06.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:06.774 Initialization complete. Launching workers. 
00:35:06.774 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38607, failed: 0 00:35:06.774 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2787, failed to submit 35820 00:35:06.774 success 588, unsuccessful 2199, failed 0 00:35:06.774 17:45:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:06.774 17:45:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.774 17:45:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:06.774 17:45:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.774 17:45:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:06.774 17:45:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.774 17:45:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:07.708 17:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.708 17:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2171558 00:35:07.708 17:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2171558 ']' 00:35:07.708 17:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2171558 00:35:07.708 17:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:07.708 17:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:07.708 17:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2171558 00:35:07.708 17:45:34 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:07.708 17:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:07.708 17:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2171558' 00:35:07.708 killing process with pid 2171558 00:35:07.708 17:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2171558 00:35:07.708 17:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2171558 00:35:07.967 00:35:07.967 real 0m14.216s 00:35:07.967 user 0m54.157s 00:35:07.967 sys 0m2.623s 00:35:07.967 17:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:07.967 17:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:07.967 ************************************ 00:35:07.967 END TEST spdk_target_abort 00:35:07.967 ************************************ 00:35:07.967 17:45:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:07.967 17:45:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:07.967 17:45:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:07.967 17:45:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:07.967 ************************************ 00:35:07.967 START TEST kernel_target_abort 00:35:07.967 ************************************ 00:35:07.967 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:07.967 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:07.967 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:07.967 17:45:34 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.967 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.967 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.967 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.967 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.967 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.967 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.967 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.968 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.968 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:07.968 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:07.968 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:07.968 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:07.968 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:07.968 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:07.968 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:35:07.968 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:07.968 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:07.968 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:07.968 17:45:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:10.505 Waiting for block devices as requested 00:35:10.764 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:10.764 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:10.764 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:11.024 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:11.024 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:11.024 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:11.283 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:11.283 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:11.283 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:11.283 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:11.541 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:11.541 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:11.541 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:11.800 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:11.800 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:11.800 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:12.059 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:12.059 17:45:38 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:12.059 No valid GPT data, bailing 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:12.059 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:12.318 00:35:12.318 Discovery Log Number of Records 2, Generation counter 2 00:35:12.318 =====Discovery Log Entry 0====== 00:35:12.318 trtype: tcp 00:35:12.318 adrfam: ipv4 00:35:12.318 subtype: current discovery subsystem 00:35:12.318 treq: not specified, sq flow control disable supported 00:35:12.318 portid: 1 00:35:12.318 trsvcid: 4420 00:35:12.318 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:12.318 traddr: 10.0.0.1 00:35:12.318 eflags: none 00:35:12.318 sectype: none 00:35:12.318 =====Discovery Log Entry 1====== 00:35:12.318 trtype: tcp 00:35:12.318 adrfam: ipv4 00:35:12.318 subtype: nvme subsystem 00:35:12.318 treq: not specified, sq flow control disable supported 00:35:12.318 portid: 1 00:35:12.318 trsvcid: 4420 00:35:12.318 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:12.318 traddr: 10.0.0.1 00:35:12.318 eflags: none 00:35:12.318 sectype: none 00:35:12.318 17:45:38 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:12.318 17:45:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:15.602 Initializing NVMe Controllers 00:35:15.602 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:15.602 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:15.602 Initialization complete. Launching workers. 
00:35:15.602 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79554, failed: 0 00:35:15.602 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 79554, failed to submit 0 00:35:15.602 success 0, unsuccessful 79554, failed 0 00:35:15.602 17:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:15.602 17:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:18.888 Initializing NVMe Controllers 00:35:18.888 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:18.888 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:18.888 Initialization complete. Launching workers. 00:35:18.888 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146563, failed: 0 00:35:18.888 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28218, failed to submit 118345 00:35:18.888 success 0, unsuccessful 28218, failed 0 00:35:18.888 17:45:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:18.888 17:45:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:21.419 Initializing NVMe Controllers 00:35:21.419 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:21.419 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:21.419 Initialization complete. Launching workers. 
00:35:21.419 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 131691, failed: 0 00:35:21.419 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32922, failed to submit 98769 00:35:21.419 success 0, unsuccessful 32922, failed 0 00:35:21.419 17:45:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:21.419 17:45:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:21.419 17:45:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:21.678 17:45:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:21.678 17:45:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:21.678 17:45:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:21.678 17:45:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:21.678 17:45:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:21.678 17:45:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:21.678 17:45:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:24.214 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:24.214 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:24.473 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:25.411 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:25.411 00:35:25.412 real 0m17.520s 00:35:25.412 user 0m8.639s 00:35:25.412 sys 0m5.227s 00:35:25.412 17:45:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:25.412 17:45:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:25.412 ************************************ 00:35:25.412 END TEST kernel_target_abort 00:35:25.412 ************************************ 00:35:25.412 17:45:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:25.412 17:45:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:25.412 17:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:25.412 17:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:25.412 17:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:25.412 17:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:25.412 17:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:25.412 17:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:25.412 rmmod nvme_tcp 00:35:25.412 rmmod nvme_fabrics 00:35:25.412 rmmod nvme_keyring 00:35:25.670 17:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:35:25.670 17:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:25.670 17:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:25.670 17:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2171558 ']' 00:35:25.670 17:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2171558 00:35:25.671 17:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2171558 ']' 00:35:25.671 17:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2171558 00:35:25.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2171558) - No such process 00:35:25.671 17:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2171558 is not found' 00:35:25.671 Process with pid 2171558 is not found 00:35:25.671 17:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:25.671 17:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:28.205 Waiting for block devices as requested 00:35:28.205 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:28.464 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:28.464 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:28.464 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:28.725 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:28.725 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:28.725 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:28.725 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:28.984 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:28.984 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:28.984 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:29.243 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:29.243 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:29.243 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:29.243 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:29.502 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:29.502 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:29.502 17:45:56 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:29.502 17:45:56 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:29.502 17:45:56 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:29.502 17:45:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:29.502 17:45:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:29.502 17:45:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:29.502 17:45:56 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:29.502 17:45:56 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:29.502 17:45:56 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.502 17:45:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:29.502 17:45:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.597 17:45:58 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:31.597 00:35:31.597 real 0m48.327s 00:35:31.597 user 1m7.119s 00:35:31.597 sys 0m16.535s 00:35:31.597 17:45:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:31.597 17:45:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:31.597 ************************************ 00:35:31.597 END TEST nvmf_abort_qd_sizes 00:35:31.597 ************************************ 00:35:31.597 17:45:58 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:31.597 17:45:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:31.597 17:45:58 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:31.597 17:45:58 -- common/autotest_common.sh@10 -- # set +x 00:35:31.857 ************************************ 00:35:31.857 START TEST keyring_file 00:35:31.857 ************************************ 00:35:31.857 17:45:58 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:31.857 * Looking for test storage... 00:35:31.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:31.857 17:45:58 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:31.857 17:45:58 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:35:31.857 17:45:58 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:31.857 17:45:58 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.857 17:45:58 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:31.857 17:45:58 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.857 17:45:58 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:31.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.857 --rc genhtml_branch_coverage=1 00:35:31.857 --rc genhtml_function_coverage=1 00:35:31.857 --rc genhtml_legend=1 00:35:31.857 --rc geninfo_all_blocks=1 00:35:31.857 --rc geninfo_unexecuted_blocks=1 00:35:31.857 00:35:31.857 ' 00:35:31.857 17:45:58 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:31.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.857 --rc genhtml_branch_coverage=1 00:35:31.857 --rc genhtml_function_coverage=1 00:35:31.857 --rc genhtml_legend=1 00:35:31.857 --rc geninfo_all_blocks=1 00:35:31.857 --rc 
geninfo_unexecuted_blocks=1 00:35:31.857 00:35:31.857 ' 00:35:31.857 17:45:58 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:31.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.857 --rc genhtml_branch_coverage=1 00:35:31.857 --rc genhtml_function_coverage=1 00:35:31.857 --rc genhtml_legend=1 00:35:31.857 --rc geninfo_all_blocks=1 00:35:31.857 --rc geninfo_unexecuted_blocks=1 00:35:31.857 00:35:31.857 ' 00:35:31.857 17:45:58 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:31.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.857 --rc genhtml_branch_coverage=1 00:35:31.857 --rc genhtml_function_coverage=1 00:35:31.857 --rc genhtml_legend=1 00:35:31.857 --rc geninfo_all_blocks=1 00:35:31.857 --rc geninfo_unexecuted_blocks=1 00:35:31.857 00:35:31.857 ' 00:35:31.857 17:45:58 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:31.857 17:45:58 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.857 17:45:58 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.857 17:45:58 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.857 17:45:58 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.858 17:45:58 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.858 17:45:58 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.858 17:45:58 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.858 17:45:58 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.858 17:45:58 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.858 17:45:58 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:31.858 17:45:58 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:31.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:31.858 17:45:58 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:31.858 17:45:58 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:31.858 17:45:58 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:31.858 17:45:58 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:31.858 17:45:58 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:31.858 17:45:58 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:31.858 17:45:58 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:31.858 17:45:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:31.858 17:45:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:31.858 17:45:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:31.858 17:45:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:31.858 17:45:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:31.858 17:45:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NtXhfxGiPX 00:35:31.858 17:45:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:31.858 17:45:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:32.117 17:45:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NtXhfxGiPX 00:35:32.117 17:45:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NtXhfxGiPX 00:35:32.117 17:45:58 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.NtXhfxGiPX 00:35:32.117 17:45:58 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:32.117 17:45:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:32.117 17:45:58 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:32.117 17:45:58 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:32.117 17:45:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:32.117 17:45:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:32.117 17:45:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sh6tfHSi2v 00:35:32.117 17:45:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:32.117 17:45:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:32.117 17:45:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:32.117 17:45:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:32.117 17:45:58 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:32.117 17:45:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:32.117 17:45:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:32.117 17:45:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sh6tfHSi2v 00:35:32.117 17:45:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sh6tfHSi2v 00:35:32.117 17:45:58 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.sh6tfHSi2v 
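The `prep_key`/`format_interchange_psk` steps above wrap each raw hex key in the NVMe TLS PSK interchange format via an inline Python snippet (the `python -` call in the trace), then write it to a `mktemp` path and `chmod 0600` it. A sketch of what that encoding looks like, assuming the standard NVMe/TCP interchange layout (a `NVMeTLSkey-1` prefix, a hash indicator, and base64 of the key bytes plus a trailing little-endian CRC32) — an approximation of SPDK's `format_key` helper, not the helper itself:

```python
import base64, struct, zlib

def format_interchange_psk(hex_key: str, digest: int = 0) -> str:
    """Sketch of the PSK interchange encoding:
    NVMeTLSkey-1:<digest>:base64(key || CRC32(key)):  (digest 0 = no hash)."""
    key = bytes.fromhex(hex_key)
    crc = struct.pack("<I", zlib.crc32(key))  # little-endian CRC32 trailer
    return "NVMeTLSkey-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode())

# key0 from the trace; the result is what lands in /tmp/tmp.XXXXXX before chmod 0600
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)
```

The CRC32 trailer lets any consumer verify the key was not truncated or corrupted in transit before handing it to the TLS layer.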
00:35:32.117 17:45:58 keyring_file -- keyring/file.sh@30 -- # tgtpid=2180153 00:35:32.117 17:45:58 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:32.117 17:45:58 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2180153 00:35:32.117 17:45:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2180153 ']' 00:35:32.117 17:45:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.117 17:45:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.117 17:45:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.118 17:45:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.118 17:45:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:32.118 [2024-12-09 17:45:58.534110] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:35:32.118 [2024-12-09 17:45:58.534156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180153 ] 00:35:32.118 [2024-12-09 17:45:58.608023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.118 [2024-12-09 17:45:58.648485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.376 17:45:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:32.376 17:45:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:32.376 17:45:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:32.376 17:45:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.376 17:45:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:32.376 [2024-12-09 17:45:58.860451] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.376 null0 00:35:32.376 [2024-12-09 17:45:58.892510] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:32.376 [2024-12-09 17:45:58.892773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:32.376 17:45:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.376 17:45:58 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:32.376 17:45:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:32.376 17:45:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:32.376 17:45:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:32.635 [2024-12-09 17:45:58.920578] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:32.635 request: 00:35:32.635 { 00:35:32.635 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:32.635 "secure_channel": false, 00:35:32.635 "listen_address": { 00:35:32.635 "trtype": "tcp", 00:35:32.635 "traddr": "127.0.0.1", 00:35:32.635 "trsvcid": "4420" 00:35:32.635 }, 00:35:32.635 "method": "nvmf_subsystem_add_listener", 00:35:32.635 "req_id": 1 00:35:32.635 } 00:35:32.635 Got JSON-RPC error response 00:35:32.635 response: 00:35:32.635 { 00:35:32.635 "code": -32602, 00:35:32.635 "message": "Invalid parameters" 00:35:32.635 } 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:32.635 17:45:58 keyring_file -- keyring/file.sh@47 -- # bperfpid=2180165 00:35:32.635 17:45:58 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:32.635 17:45:58 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2180165 /var/tmp/bperf.sock 00:35:32.635 17:45:58 
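The `NOT rpc_cmd nvmf_subsystem_add_listener ...` step above is an expected-failure test: the listener already exists, so the RPC must come back with the `-32602` "Invalid parameters" error captured in the trace, and the `NOT` wrapper inverts the exit status. A hypothetical checker for that pattern (the helper name and the tolerant `error`-key handling are assumptions, not SPDK code):

```python
import json

def expect_rpc_error(response_text: str, code: int) -> bool:
    """Succeed only if a JSON-RPC response carries the expected error code,
    mirroring the NOT-wrapped rpc_cmd invocation in the trace."""
    resp = json.loads(response_text)
    # The log prints the bare error object; a full envelope nests it under "error".
    err = resp.get("error", resp)
    return err.get("code") == code

# The duplicate-listener response captured above:
duplicate_listener = '{"code": -32602, "message": "Invalid parameters"}'
print(expect_rpc_error(duplicate_listener, -32602))  # True
```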
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2180165 ']' 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:32.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.635 17:45:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:32.635 [2024-12-09 17:45:58.972557] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 00:35:32.635 [2024-12-09 17:45:58.972598] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180165 ] 00:35:32.635 [2024-12-09 17:45:59.046055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.635 [2024-12-09 17:45:59.087367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:32.894 17:45:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:32.894 17:45:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:32.894 17:45:59 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NtXhfxGiPX 00:35:32.894 17:45:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NtXhfxGiPX 00:35:32.894 17:45:59 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sh6tfHSi2v 00:35:32.894 17:45:59 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sh6tfHSi2v 00:35:33.153 17:45:59 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:33.153 17:45:59 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:33.153 17:45:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:33.153 17:45:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:33.153 17:45:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:33.412 17:45:59 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.NtXhfxGiPX == \/\t\m\p\/\t\m\p\.\N\t\X\h\f\x\G\i\P\X ]] 00:35:33.412 17:45:59 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:33.412 17:45:59 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:33.412 17:45:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:33.412 17:45:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:33.412 17:45:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:33.412 17:45:59 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.sh6tfHSi2v == \/\t\m\p\/\t\m\p\.\s\h\6\t\f\H\S\i\2\v ]] 00:35:33.412 17:45:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:33.412 17:45:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:33.412 17:45:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:33.670 17:45:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:33.670 17:45:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:33.671 17:45:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:33.671 17:46:00 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:33.671 17:46:00 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:33.671 17:46:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:33.671 17:46:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:33.671 17:46:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:33.671 17:46:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:33.671 17:46:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:33.929 17:46:00 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:33.929 17:46:00 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:33.929 17:46:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:34.188 [2024-12-09 17:46:00.517645] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:34.188 nvme0n1 00:35:34.188 17:46:00 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:34.188 17:46:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:34.188 17:46:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:34.188 17:46:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.188 17:46:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:34.188 17:46:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:34.447 17:46:00 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:34.447 17:46:00 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:34.447 17:46:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:34.447 17:46:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:34.447 17:46:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.447 17:46:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:34.447 17:46:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.706 17:46:01 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:34.706 17:46:01 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:34.706 Running I/O for 1 seconds... 00:35:35.642 19388.00 IOPS, 75.73 MiB/s 00:35:35.642 Latency(us) 00:35:35.642 [2024-12-09T16:46:02.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:35.642 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:35.642 nvme0n1 : 1.00 19433.13 75.91 0.00 0.00 6574.80 2824.29 13232.03 00:35:35.642 [2024-12-09T16:46:02.182Z] =================================================================================================================== 00:35:35.642 [2024-12-09T16:46:02.182Z] Total : 19433.13 75.91 0.00 0.00 6574.80 2824.29 13232.03 00:35:35.642 { 00:35:35.642 "results": [ 00:35:35.642 { 00:35:35.642 "job": "nvme0n1", 00:35:35.642 "core_mask": "0x2", 00:35:35.642 "workload": "randrw", 00:35:35.642 "percentage": 50, 00:35:35.642 "status": "finished", 00:35:35.642 "queue_depth": 128, 00:35:35.642 "io_size": 4096, 00:35:35.642 "runtime": 1.004316, 00:35:35.642 "iops": 19433.12662548441, 00:35:35.642 "mibps": 75.91065088079847, 
00:35:35.642 "io_failed": 0, 00:35:35.642 "io_timeout": 0, 00:35:35.642 "avg_latency_us": 6574.795900814186, 00:35:35.642 "min_latency_us": 2824.289523809524, 00:35:35.642 "max_latency_us": 13232.030476190475 00:35:35.642 } 00:35:35.642 ], 00:35:35.642 "core_count": 1 00:35:35.642 } 00:35:35.642 17:46:02 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:35.642 17:46:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:35.900 17:46:02 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:35.901 17:46:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:35.901 17:46:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:35.901 17:46:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:35.901 17:46:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:35.901 17:46:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.158 17:46:02 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:36.158 17:46:02 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:36.158 17:46:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:36.158 17:46:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.158 17:46:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.158 17:46:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:36.158 17:46:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.416 17:46:02 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:36.416 17:46:02 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:36.416 17:46:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:36.416 17:46:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:36.416 17:46:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:36.416 17:46:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:36.416 17:46:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:36.416 17:46:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:36.416 17:46:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:36.416 17:46:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:36.416 [2024-12-09 17:46:02.894753] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:36.416 [2024-12-09 17:46:02.895499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130a410 (107): Transport endpoint is not connected 00:35:36.416 [2024-12-09 17:46:02.896494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130a410 (9): Bad file descriptor 00:35:36.416 [2024-12-09 17:46:02.897496] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:36.416 [2024-12-09 17:46:02.897507] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:36.416 [2024-12-09 17:46:02.897520] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:36.416 [2024-12-09 17:46:02.897528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:36.416 request: 00:35:36.416 { 00:35:36.416 "name": "nvme0", 00:35:36.416 "trtype": "tcp", 00:35:36.416 "traddr": "127.0.0.1", 00:35:36.416 "adrfam": "ipv4", 00:35:36.416 "trsvcid": "4420", 00:35:36.416 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:36.416 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:36.416 "prchk_reftag": false, 00:35:36.416 "prchk_guard": false, 00:35:36.416 "hdgst": false, 00:35:36.416 "ddgst": false, 00:35:36.416 "psk": "key1", 00:35:36.416 "allow_unrecognized_csi": false, 00:35:36.416 "method": "bdev_nvme_attach_controller", 00:35:36.416 "req_id": 1 00:35:36.416 } 00:35:36.416 Got JSON-RPC error response 00:35:36.416 response: 00:35:36.416 { 00:35:36.416 "code": -5, 00:35:36.416 "message": "Input/output error" 00:35:36.416 } 00:35:36.416 17:46:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:36.416 17:46:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:36.416 17:46:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:36.416 17:46:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:36.416 17:46:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:36.416 17:46:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:36.416 17:46:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.416 17:46:02 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:36.416 17:46:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:36.416 17:46:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.675 17:46:03 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:36.675 17:46:03 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:36.675 17:46:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:36.675 17:46:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.675 17:46:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.675 17:46:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:36.675 17:46:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.934 17:46:03 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:36.934 17:46:03 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:36.934 17:46:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:37.192 17:46:03 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:37.192 17:46:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:37.192 17:46:03 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:37.192 17:46:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.192 17:46:03 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:37.451 17:46:03 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:37.451 17:46:03 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.NtXhfxGiPX 00:35:37.451 17:46:03 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.NtXhfxGiPX 00:35:37.451 17:46:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:37.451 17:46:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.NtXhfxGiPX 00:35:37.451 17:46:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:37.451 17:46:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:37.451 17:46:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:37.451 17:46:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:37.451 17:46:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NtXhfxGiPX 00:35:37.451 17:46:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NtXhfxGiPX 00:35:37.709 [2024-12-09 17:46:04.079705] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NtXhfxGiPX': 0100660 00:35:37.709 [2024-12-09 17:46:04.079730] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:37.709 request: 00:35:37.709 { 00:35:37.709 "name": "key0", 00:35:37.709 "path": "/tmp/tmp.NtXhfxGiPX", 00:35:37.709 "method": "keyring_file_add_key", 00:35:37.709 "req_id": 1 00:35:37.709 } 00:35:37.709 Got JSON-RPC error response 00:35:37.709 response: 00:35:37.709 { 00:35:37.709 "code": -1, 00:35:37.709 "message": "Operation not permitted" 00:35:37.709 } 00:35:37.709 17:46:04 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:37.709 17:46:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:37.709 17:46:04 
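The negative test above `chmod`s the key file to 0660 and confirms that `keyring_file_add_key` rejects it ("Invalid permissions for key file '/tmp/tmp.NtXhfxGiPX': 0100660"), then restores 0600 so the add succeeds again. A sketch of that group/other-bits check, assuming it mirrors the intent of `keyring_file_check_path` (the function name here is illustrative):

```python
import os, stat, tempfile

def key_file_permissions_ok(path: str) -> bool:
    """Reject regular files whose mode grants any group/other access:
    only owner-only modes such as 0600 pass, as in the trace."""
    st = os.stat(path)
    return stat.S_ISREG(st.st_mode) and (st.st_mode & 0o077) == 0

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o660)
print(key_file_permissions_ok(path))  # False: 0660 leaks group read/write
os.chmod(path, 0o600)
print(key_file_permissions_ok(path))  # True: matches the chmod 0600 in the trace
os.unlink(path)
```

Enforcing owner-only modes on PSK files is the same hygiene OpenSSH applies to private keys: a group- or world-readable secret is treated as already compromised.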
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:37.709 17:46:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:37.709 17:46:04 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.NtXhfxGiPX 00:35:37.709 17:46:04 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NtXhfxGiPX 00:35:37.709 17:46:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NtXhfxGiPX 00:35:37.968 17:46:04 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.NtXhfxGiPX 00:35:37.968 17:46:04 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:37.968 17:46:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:37.968 17:46:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:37.968 17:46:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:37.968 17:46:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:37.968 17:46:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.968 17:46:04 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:37.968 17:46:04 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:37.968 17:46:04 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:37.968 17:46:04 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:37.968 17:46:04 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:37.968 17:46:04 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:37.968 17:46:04 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:37.968 17:46:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:37.968 17:46:04 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:37.968 17:46:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:38.227 [2024-12-09 17:46:04.661247] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.NtXhfxGiPX': No such file or directory 00:35:38.227 [2024-12-09 17:46:04.661265] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:38.227 [2024-12-09 17:46:04.661282] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:38.227 [2024-12-09 17:46:04.661289] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:38.227 [2024-12-09 17:46:04.661296] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:38.227 [2024-12-09 17:46:04.661302] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:38.227 request: 00:35:38.227 { 00:35:38.227 "name": "nvme0", 00:35:38.227 "trtype": "tcp", 00:35:38.227 "traddr": "127.0.0.1", 00:35:38.227 "adrfam": "ipv4", 00:35:38.227 "trsvcid": "4420", 00:35:38.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:38.227 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:38.227 "prchk_reftag": false, 00:35:38.227 "prchk_guard": false, 00:35:38.227 "hdgst": false, 00:35:38.227 "ddgst": false, 00:35:38.227 "psk": "key0", 00:35:38.227 "allow_unrecognized_csi": false, 00:35:38.227 "method": "bdev_nvme_attach_controller", 00:35:38.227 "req_id": 1 00:35:38.227 } 00:35:38.227 Got JSON-RPC error response 00:35:38.227 response: 00:35:38.227 { 00:35:38.227 "code": -19, 00:35:38.227 "message": "No such device" 00:35:38.227 } 00:35:38.227 17:46:04 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:38.227 17:46:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:38.227 17:46:04 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:38.227 17:46:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:38.227 17:46:04 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:38.227 17:46:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:38.486 17:46:04 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:38.486 17:46:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:38.486 17:46:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:38.486 17:46:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:38.486 17:46:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:38.486 17:46:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:38.486 17:46:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VIx8H1Zqev 00:35:38.486 17:46:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:38.486 17:46:04 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:38.486 17:46:04 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:38.486 17:46:04 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:38.486 17:46:04 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:38.486 17:46:04 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:38.486 17:46:04 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:38.486 17:46:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VIx8H1Zqev 00:35:38.486 17:46:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VIx8H1Zqev 00:35:38.486 17:46:04 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.VIx8H1Zqev 00:35:38.486 17:46:04 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VIx8H1Zqev 00:35:38.486 17:46:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VIx8H1Zqev 00:35:38.743 17:46:05 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:38.743 17:46:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:39.001 nvme0n1 00:35:39.001 17:46:05 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:39.001 17:46:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.001 17:46:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:39.001 17:46:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.001 17:46:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.001 
17:46:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:39.259 17:46:05 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:39.259 17:46:05 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:39.259 17:46:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:39.259 17:46:05 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:39.259 17:46:05 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:39.259 17:46:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.259 17:46:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:39.259 17:46:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.517 17:46:05 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:39.517 17:46:05 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:39.517 17:46:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.517 17:46:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:39.517 17:46:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.517 17:46:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.517 17:46:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:39.776 17:46:06 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:39.776 17:46:06 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:39.776 17:46:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:35:40.034 17:46:06 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:40.034 17:46:06 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:40.034 17:46:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:40.034 17:46:06 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:40.034 17:46:06 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VIx8H1Zqev 00:35:40.034 17:46:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VIx8H1Zqev 00:35:40.293 17:46:06 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sh6tfHSi2v 00:35:40.293 17:46:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sh6tfHSi2v 00:35:40.551 17:46:06 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.551 17:46:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.810 nvme0n1 00:35:40.810 17:46:07 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:40.810 17:46:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:41.069 17:46:07 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:41.069 "subsystems": [ 00:35:41.069 { 00:35:41.069 "subsystem": "keyring", 00:35:41.069 
"config": [ 00:35:41.069 { 00:35:41.069 "method": "keyring_file_add_key", 00:35:41.069 "params": { 00:35:41.069 "name": "key0", 00:35:41.069 "path": "/tmp/tmp.VIx8H1Zqev" 00:35:41.069 } 00:35:41.069 }, 00:35:41.069 { 00:35:41.069 "method": "keyring_file_add_key", 00:35:41.069 "params": { 00:35:41.069 "name": "key1", 00:35:41.069 "path": "/tmp/tmp.sh6tfHSi2v" 00:35:41.069 } 00:35:41.069 } 00:35:41.069 ] 00:35:41.069 }, 00:35:41.069 { 00:35:41.069 "subsystem": "iobuf", 00:35:41.069 "config": [ 00:35:41.069 { 00:35:41.069 "method": "iobuf_set_options", 00:35:41.069 "params": { 00:35:41.069 "small_pool_count": 8192, 00:35:41.069 "large_pool_count": 1024, 00:35:41.069 "small_bufsize": 8192, 00:35:41.069 "large_bufsize": 135168, 00:35:41.069 "enable_numa": false 00:35:41.069 } 00:35:41.069 } 00:35:41.069 ] 00:35:41.069 }, 00:35:41.069 { 00:35:41.069 "subsystem": "sock", 00:35:41.069 "config": [ 00:35:41.069 { 00:35:41.069 "method": "sock_set_default_impl", 00:35:41.069 "params": { 00:35:41.069 "impl_name": "posix" 00:35:41.069 } 00:35:41.069 }, 00:35:41.069 { 00:35:41.069 "method": "sock_impl_set_options", 00:35:41.069 "params": { 00:35:41.069 "impl_name": "ssl", 00:35:41.069 "recv_buf_size": 4096, 00:35:41.069 "send_buf_size": 4096, 00:35:41.069 "enable_recv_pipe": true, 00:35:41.069 "enable_quickack": false, 00:35:41.069 "enable_placement_id": 0, 00:35:41.069 "enable_zerocopy_send_server": true, 00:35:41.069 "enable_zerocopy_send_client": false, 00:35:41.069 "zerocopy_threshold": 0, 00:35:41.069 "tls_version": 0, 00:35:41.069 "enable_ktls": false 00:35:41.069 } 00:35:41.069 }, 00:35:41.069 { 00:35:41.069 "method": "sock_impl_set_options", 00:35:41.069 "params": { 00:35:41.069 "impl_name": "posix", 00:35:41.069 "recv_buf_size": 2097152, 00:35:41.069 "send_buf_size": 2097152, 00:35:41.069 "enable_recv_pipe": true, 00:35:41.069 "enable_quickack": false, 00:35:41.069 "enable_placement_id": 0, 00:35:41.069 "enable_zerocopy_send_server": true, 00:35:41.069 
"enable_zerocopy_send_client": false, 00:35:41.069 "zerocopy_threshold": 0, 00:35:41.069 "tls_version": 0, 00:35:41.069 "enable_ktls": false 00:35:41.069 } 00:35:41.069 } 00:35:41.069 ] 00:35:41.069 }, 00:35:41.069 { 00:35:41.069 "subsystem": "vmd", 00:35:41.069 "config": [] 00:35:41.069 }, 00:35:41.069 { 00:35:41.069 "subsystem": "accel", 00:35:41.069 "config": [ 00:35:41.069 { 00:35:41.069 "method": "accel_set_options", 00:35:41.069 "params": { 00:35:41.069 "small_cache_size": 128, 00:35:41.069 "large_cache_size": 16, 00:35:41.069 "task_count": 2048, 00:35:41.069 "sequence_count": 2048, 00:35:41.069 "buf_count": 2048 00:35:41.069 } 00:35:41.069 } 00:35:41.069 ] 00:35:41.069 }, 00:35:41.069 { 00:35:41.069 "subsystem": "bdev", 00:35:41.069 "config": [ 00:35:41.069 { 00:35:41.069 "method": "bdev_set_options", 00:35:41.069 "params": { 00:35:41.069 "bdev_io_pool_size": 65535, 00:35:41.069 "bdev_io_cache_size": 256, 00:35:41.069 "bdev_auto_examine": true, 00:35:41.069 "iobuf_small_cache_size": 128, 00:35:41.069 "iobuf_large_cache_size": 16 00:35:41.069 } 00:35:41.069 }, 00:35:41.069 { 00:35:41.070 "method": "bdev_raid_set_options", 00:35:41.070 "params": { 00:35:41.070 "process_window_size_kb": 1024, 00:35:41.070 "process_max_bandwidth_mb_sec": 0 00:35:41.070 } 00:35:41.070 }, 00:35:41.070 { 00:35:41.070 "method": "bdev_iscsi_set_options", 00:35:41.070 "params": { 00:35:41.070 "timeout_sec": 30 00:35:41.070 } 00:35:41.070 }, 00:35:41.070 { 00:35:41.070 "method": "bdev_nvme_set_options", 00:35:41.070 "params": { 00:35:41.070 "action_on_timeout": "none", 00:35:41.070 "timeout_us": 0, 00:35:41.070 "timeout_admin_us": 0, 00:35:41.070 "keep_alive_timeout_ms": 10000, 00:35:41.070 "arbitration_burst": 0, 00:35:41.070 "low_priority_weight": 0, 00:35:41.070 "medium_priority_weight": 0, 00:35:41.070 "high_priority_weight": 0, 00:35:41.070 "nvme_adminq_poll_period_us": 10000, 00:35:41.070 "nvme_ioq_poll_period_us": 0, 00:35:41.070 "io_queue_requests": 512, 00:35:41.070 
"delay_cmd_submit": true, 00:35:41.070 "transport_retry_count": 4, 00:35:41.070 "bdev_retry_count": 3, 00:35:41.070 "transport_ack_timeout": 0, 00:35:41.070 "ctrlr_loss_timeout_sec": 0, 00:35:41.070 "reconnect_delay_sec": 0, 00:35:41.070 "fast_io_fail_timeout_sec": 0, 00:35:41.070 "disable_auto_failback": false, 00:35:41.070 "generate_uuids": false, 00:35:41.070 "transport_tos": 0, 00:35:41.070 "nvme_error_stat": false, 00:35:41.070 "rdma_srq_size": 0, 00:35:41.070 "io_path_stat": false, 00:35:41.070 "allow_accel_sequence": false, 00:35:41.070 "rdma_max_cq_size": 0, 00:35:41.070 "rdma_cm_event_timeout_ms": 0, 00:35:41.070 "dhchap_digests": [ 00:35:41.070 "sha256", 00:35:41.070 "sha384", 00:35:41.070 "sha512" 00:35:41.070 ], 00:35:41.070 "dhchap_dhgroups": [ 00:35:41.070 "null", 00:35:41.070 "ffdhe2048", 00:35:41.070 "ffdhe3072", 00:35:41.070 "ffdhe4096", 00:35:41.070 "ffdhe6144", 00:35:41.070 "ffdhe8192" 00:35:41.070 ] 00:35:41.070 } 00:35:41.070 }, 00:35:41.070 { 00:35:41.070 "method": "bdev_nvme_attach_controller", 00:35:41.070 "params": { 00:35:41.070 "name": "nvme0", 00:35:41.070 "trtype": "TCP", 00:35:41.070 "adrfam": "IPv4", 00:35:41.070 "traddr": "127.0.0.1", 00:35:41.070 "trsvcid": "4420", 00:35:41.070 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:41.070 "prchk_reftag": false, 00:35:41.070 "prchk_guard": false, 00:35:41.070 "ctrlr_loss_timeout_sec": 0, 00:35:41.070 "reconnect_delay_sec": 0, 00:35:41.070 "fast_io_fail_timeout_sec": 0, 00:35:41.070 "psk": "key0", 00:35:41.070 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:41.070 "hdgst": false, 00:35:41.070 "ddgst": false, 00:35:41.070 "multipath": "multipath" 00:35:41.070 } 00:35:41.070 }, 00:35:41.070 { 00:35:41.070 "method": "bdev_nvme_set_hotplug", 00:35:41.070 "params": { 00:35:41.070 "period_us": 100000, 00:35:41.070 "enable": false 00:35:41.070 } 00:35:41.070 }, 00:35:41.070 { 00:35:41.070 "method": "bdev_wait_for_examine" 00:35:41.070 } 00:35:41.070 ] 00:35:41.070 }, 00:35:41.070 { 00:35:41.070 
"subsystem": "nbd", 00:35:41.070 "config": [] 00:35:41.070 } 00:35:41.070 ] 00:35:41.070 }' 00:35:41.070 17:46:07 keyring_file -- keyring/file.sh@115 -- # killprocess 2180165 00:35:41.070 17:46:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2180165 ']' 00:35:41.070 17:46:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2180165 00:35:41.070 17:46:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:41.070 17:46:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:41.070 17:46:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2180165 00:35:41.070 17:46:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:41.070 17:46:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:41.070 17:46:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2180165' 00:35:41.070 killing process with pid 2180165 00:35:41.070 17:46:07 keyring_file -- common/autotest_common.sh@973 -- # kill 2180165 00:35:41.070 Received shutdown signal, test time was about 1.000000 seconds 00:35:41.070 00:35:41.070 Latency(us) 00:35:41.070 [2024-12-09T16:46:07.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.070 [2024-12-09T16:46:07.610Z] =================================================================================================================== 00:35:41.070 [2024-12-09T16:46:07.610Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:41.070 17:46:07 keyring_file -- common/autotest_common.sh@978 -- # wait 2180165 00:35:41.329 17:46:07 keyring_file -- keyring/file.sh@118 -- # bperfpid=2181669 00:35:41.330 17:46:07 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2181669 /var/tmp/bperf.sock 00:35:41.330 17:46:07 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2181669 ']' 00:35:41.330 17:46:07 keyring_file -- keyring/file.sh@116 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:41.330 17:46:07 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:41.330 17:46:07 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:41.330 17:46:07 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:41.330 "subsystems": [ 00:35:41.330 { 00:35:41.330 "subsystem": "keyring", 00:35:41.330 "config": [ 00:35:41.330 { 00:35:41.330 "method": "keyring_file_add_key", 00:35:41.330 "params": { 00:35:41.330 "name": "key0", 00:35:41.330 "path": "/tmp/tmp.VIx8H1Zqev" 00:35:41.330 } 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "method": "keyring_file_add_key", 00:35:41.330 "params": { 00:35:41.330 "name": "key1", 00:35:41.330 "path": "/tmp/tmp.sh6tfHSi2v" 00:35:41.330 } 00:35:41.330 } 00:35:41.330 ] 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "subsystem": "iobuf", 00:35:41.330 "config": [ 00:35:41.330 { 00:35:41.330 "method": "iobuf_set_options", 00:35:41.330 "params": { 00:35:41.330 "small_pool_count": 8192, 00:35:41.330 "large_pool_count": 1024, 00:35:41.330 "small_bufsize": 8192, 00:35:41.330 "large_bufsize": 135168, 00:35:41.330 "enable_numa": false 00:35:41.330 } 00:35:41.330 } 00:35:41.330 ] 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "subsystem": "sock", 00:35:41.330 "config": [ 00:35:41.330 { 00:35:41.330 "method": "sock_set_default_impl", 00:35:41.330 "params": { 00:35:41.330 "impl_name": "posix" 00:35:41.330 } 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "method": "sock_impl_set_options", 00:35:41.330 "params": { 00:35:41.330 "impl_name": "ssl", 00:35:41.330 "recv_buf_size": 4096, 00:35:41.330 "send_buf_size": 4096, 00:35:41.330 "enable_recv_pipe": true, 00:35:41.330 "enable_quickack": false, 00:35:41.330 "enable_placement_id": 0, 00:35:41.330 "enable_zerocopy_send_server": true, 00:35:41.330 "enable_zerocopy_send_client": false, 00:35:41.330 
"zerocopy_threshold": 0, 00:35:41.330 "tls_version": 0, 00:35:41.330 "enable_ktls": false 00:35:41.330 } 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "method": "sock_impl_set_options", 00:35:41.330 "params": { 00:35:41.330 "impl_name": "posix", 00:35:41.330 "recv_buf_size": 2097152, 00:35:41.330 "send_buf_size": 2097152, 00:35:41.330 "enable_recv_pipe": true, 00:35:41.330 "enable_quickack": false, 00:35:41.330 "enable_placement_id": 0, 00:35:41.330 "enable_zerocopy_send_server": true, 00:35:41.330 "enable_zerocopy_send_client": false, 00:35:41.330 "zerocopy_threshold": 0, 00:35:41.330 "tls_version": 0, 00:35:41.330 "enable_ktls": false 00:35:41.330 } 00:35:41.330 } 00:35:41.330 ] 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "subsystem": "vmd", 00:35:41.330 "config": [] 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "subsystem": "accel", 00:35:41.330 "config": [ 00:35:41.330 { 00:35:41.330 "method": "accel_set_options", 00:35:41.330 "params": { 00:35:41.330 "small_cache_size": 128, 00:35:41.330 "large_cache_size": 16, 00:35:41.330 "task_count": 2048, 00:35:41.330 "sequence_count": 2048, 00:35:41.330 "buf_count": 2048 00:35:41.330 } 00:35:41.330 } 00:35:41.330 ] 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "subsystem": "bdev", 00:35:41.330 "config": [ 00:35:41.330 { 00:35:41.330 "method": "bdev_set_options", 00:35:41.330 "params": { 00:35:41.330 "bdev_io_pool_size": 65535, 00:35:41.330 "bdev_io_cache_size": 256, 00:35:41.330 "bdev_auto_examine": true, 00:35:41.330 "iobuf_small_cache_size": 128, 00:35:41.330 "iobuf_large_cache_size": 16 00:35:41.330 } 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "method": "bdev_raid_set_options", 00:35:41.330 "params": { 00:35:41.330 "process_window_size_kb": 1024, 00:35:41.330 "process_max_bandwidth_mb_sec": 0 00:35:41.330 } 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "method": "bdev_iscsi_set_options", 00:35:41.330 "params": { 00:35:41.330 "timeout_sec": 30 00:35:41.330 } 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "method": 
"bdev_nvme_set_options", 00:35:41.330 "params": { 00:35:41.330 "action_on_timeout": "none", 00:35:41.330 "timeout_us": 0, 00:35:41.330 "timeout_admin_us": 0, 00:35:41.330 "keep_alive_timeout_ms": 10000, 00:35:41.330 "arbitration_burst": 0, 00:35:41.330 "low_priority_weight": 0, 00:35:41.330 "medium_priority_weight": 0, 00:35:41.330 "high_priority_weight": 0, 00:35:41.330 "nvme_adminq_poll_period_us": 10000, 00:35:41.330 "nvme_ioq_poll_period_us": 0, 00:35:41.330 "io_queue_requests": 512, 00:35:41.330 "delay_cmd_submit": true, 00:35:41.330 "transport_retry_count": 4, 00:35:41.330 "bdev_retry_count": 3, 00:35:41.330 "transport_ack_timeout": 0, 00:35:41.330 "ctrlr_loss_timeout_sec": 0, 00:35:41.330 "reconnect_delay_sec": 0, 00:35:41.330 "fast_io_fail_timeout_sec": 0, 00:35:41.330 "disable_auto_failback": false, 00:35:41.330 "generate_uuids": false, 00:35:41.330 "transport_tos": 0, 00:35:41.330 "nvme_error_stat": false, 00:35:41.330 "rdma_srq_size": 0, 00:35:41.330 "io_path_stat": false, 00:35:41.330 "allow_accel_sequence": false, 00:35:41.330 "rdma_max_cq_size": 0, 00:35:41.330 "rdma_cm_event_timeout_ms": 0, 00:35:41.330 "dhchap_digests": [ 00:35:41.330 "sha256", 00:35:41.330 "sha384", 00:35:41.330 "sha512" 00:35:41.330 ], 00:35:41.330 "dhchap_dhgroups": [ 00:35:41.330 "null", 00:35:41.330 "ffdhe2048", 00:35:41.330 "ffdhe3072", 00:35:41.330 "ffdhe4096", 00:35:41.330 "ffdhe6144", 00:35:41.330 "ffdhe8192" 00:35:41.330 ] 00:35:41.330 } 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "method": "bdev_nvme_attach_controller", 00:35:41.330 "params": { 00:35:41.330 "name": "nvme0", 00:35:41.330 "trtype": "TCP", 00:35:41.330 "adrfam": "IPv4", 00:35:41.330 "traddr": "127.0.0.1", 00:35:41.330 "trsvcid": "4420", 00:35:41.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:41.330 "prchk_reftag": false, 00:35:41.330 "prchk_guard": false, 00:35:41.330 "ctrlr_loss_timeout_sec": 0, 00:35:41.330 "reconnect_delay_sec": 0, 00:35:41.330 "fast_io_fail_timeout_sec": 0, 00:35:41.330 "psk": "key0", 
00:35:41.330 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:41.330 "hdgst": false, 00:35:41.330 "ddgst": false, 00:35:41.330 "multipath": "multipath" 00:35:41.330 } 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "method": "bdev_nvme_set_hotplug", 00:35:41.330 "params": { 00:35:41.330 "period_us": 100000, 00:35:41.330 "enable": false 00:35:41.330 } 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "method": "bdev_wait_for_examine" 00:35:41.330 } 00:35:41.330 ] 00:35:41.330 }, 00:35:41.330 { 00:35:41.330 "subsystem": "nbd", 00:35:41.330 "config": [] 00:35:41.330 } 00:35:41.330 ] 00:35:41.330 }' 00:35:41.330 17:46:07 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:41.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:41.330 17:46:07 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:41.330 17:46:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:41.330 [2024-12-09 17:46:07.659475] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:35:41.330 [2024-12-09 17:46:07.659523] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181669 ] 00:35:41.330 [2024-12-09 17:46:07.733343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.330 [2024-12-09 17:46:07.771838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.589 [2024-12-09 17:46:07.933504] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:42.156 17:46:08 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:42.156 17:46:08 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:42.156 17:46:08 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:42.156 17:46:08 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:42.156 17:46:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.414 17:46:08 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:42.414 17:46:08 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:42.414 17:46:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:42.414 17:46:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:42.414 17:46:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.414 17:46:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:42.414 17:46:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.414 17:46:08 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:42.414 17:46:08 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:42.414 17:46:08 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:42.414 17:46:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:42.414 17:46:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.414 17:46:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.414 17:46:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:42.673 17:46:09 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:42.673 17:46:09 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:42.673 17:46:09 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:42.673 17:46:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:42.932 17:46:09 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:42.932 17:46:09 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:42.932 17:46:09 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.VIx8H1Zqev /tmp/tmp.sh6tfHSi2v 00:35:42.932 17:46:09 keyring_file -- keyring/file.sh@20 -- # killprocess 2181669 00:35:42.932 17:46:09 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2181669 ']' 00:35:42.932 17:46:09 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2181669 00:35:42.932 17:46:09 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:42.932 17:46:09 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:42.932 17:46:09 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2181669 00:35:42.932 17:46:09 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:42.932 17:46:09 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:42.932 17:46:09 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2181669' 00:35:42.932 killing process with pid 2181669 00:35:42.932 17:46:09 keyring_file -- common/autotest_common.sh@973 -- # kill 2181669 00:35:42.932 Received shutdown signal, test time was about 1.000000 seconds 00:35:42.932 00:35:42.932 Latency(us) 00:35:42.932 [2024-12-09T16:46:09.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.932 [2024-12-09T16:46:09.472Z] =================================================================================================================== 00:35:42.932 [2024-12-09T16:46:09.472Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:42.932 17:46:09 keyring_file -- common/autotest_common.sh@978 -- # wait 2181669 00:35:43.191 17:46:09 keyring_file -- keyring/file.sh@21 -- # killprocess 2180153 00:35:43.191 17:46:09 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2180153 ']' 00:35:43.191 17:46:09 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2180153 00:35:43.191 17:46:09 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:43.191 17:46:09 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:43.191 17:46:09 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2180153 00:35:43.191 17:46:09 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:43.191 17:46:09 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:43.191 17:46:09 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2180153' 00:35:43.191 killing process with pid 2180153 00:35:43.191 17:46:09 keyring_file -- common/autotest_common.sh@973 -- # kill 2180153 00:35:43.191 17:46:09 keyring_file -- common/autotest_common.sh@978 -- # wait 2180153 00:35:43.450 00:35:43.450 real 0m11.678s 00:35:43.450 user 0m29.048s 00:35:43.450 sys 0m2.650s 00:35:43.450 17:46:09 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:43.450 17:46:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:43.450 ************************************ 00:35:43.450 END TEST keyring_file 00:35:43.450 ************************************ 00:35:43.450 17:46:09 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:43.450 17:46:09 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:43.450 17:46:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:43.450 17:46:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:43.450 17:46:09 -- common/autotest_common.sh@10 -- # set +x 00:35:43.450 ************************************ 00:35:43.450 START TEST keyring_linux 00:35:43.450 ************************************ 00:35:43.450 17:46:09 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:43.450 Joined session keyring: 718305798 00:35:43.709 * Looking for test storage... 
00:35:43.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:43.709 17:46:10 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:43.709 17:46:10 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:35:43.709 17:46:10 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:43.709 17:46:10 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:43.709 17:46:10 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:43.709 17:46:10 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:43.709 17:46:10 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:43.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.709 --rc genhtml_branch_coverage=1 00:35:43.709 --rc genhtml_function_coverage=1 00:35:43.709 --rc genhtml_legend=1 00:35:43.709 --rc geninfo_all_blocks=1 00:35:43.709 --rc geninfo_unexecuted_blocks=1 00:35:43.709 00:35:43.709 ' 00:35:43.709 17:46:10 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:43.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.709 --rc genhtml_branch_coverage=1 00:35:43.709 --rc genhtml_function_coverage=1 00:35:43.709 --rc genhtml_legend=1 00:35:43.709 --rc geninfo_all_blocks=1 00:35:43.709 --rc geninfo_unexecuted_blocks=1 00:35:43.709 00:35:43.709 ' 
00:35:43.709 17:46:10 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:43.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.709 --rc genhtml_branch_coverage=1 00:35:43.709 --rc genhtml_function_coverage=1 00:35:43.709 --rc genhtml_legend=1 00:35:43.709 --rc geninfo_all_blocks=1 00:35:43.709 --rc geninfo_unexecuted_blocks=1 00:35:43.709 00:35:43.710 ' 00:35:43.710 17:46:10 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:43.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.710 --rc genhtml_branch_coverage=1 00:35:43.710 --rc genhtml_function_coverage=1 00:35:43.710 --rc genhtml_legend=1 00:35:43.710 --rc geninfo_all_blocks=1 00:35:43.710 --rc geninfo_unexecuted_blocks=1 00:35:43.710 00:35:43.710 ' 00:35:43.710 17:46:10 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:43.710 17:46:10 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:43.710 17:46:10 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.710 17:46:10 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.710 17:46:10 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.710 17:46:10 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.710 17:46:10 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.710 17:46:10 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.710 17:46:10 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:43.710 17:46:10 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:43.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:43.710 17:46:10 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:43.710 17:46:10 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:43.710 17:46:10 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:43.710 17:46:10 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:43.710 17:46:10 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:43.710 17:46:10 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:43.710 /tmp/:spdk-test:key0 00:35:43.710 17:46:10 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:43.710 17:46:10 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:43.710 17:46:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:43.710 /tmp/:spdk-test:key1 00:35:43.710 17:46:10 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2182181 00:35:43.710 17:46:10 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 2182181 00:35:43.710 17:46:10 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:43.710 17:46:10 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2182181 ']' 00:35:43.710 17:46:10 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:43.710 17:46:10 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:43.710 17:46:10 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:43.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:43.710 17:46:10 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:43.710 17:46:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:43.969 [2024-12-09 17:46:10.271645] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:35:43.969 [2024-12-09 17:46:10.271693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182181 ] 00:35:43.969 [2024-12-09 17:46:10.344976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.969 [2024-12-09 17:46:10.384387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.906 17:46:11 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.906 17:46:11 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:44.906 17:46:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:44.906 17:46:11 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.906 17:46:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:44.906 [2024-12-09 17:46:11.087585] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:44.906 null0 00:35:44.906 [2024-12-09 17:46:11.119638] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:44.906 [2024-12-09 17:46:11.119909] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:44.906 17:46:11 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.906 17:46:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:44.906 648571501 00:35:44.906 17:46:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:44.906 212132014 00:35:44.906 17:46:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2182411 00:35:44.906 17:46:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2182411 /var/tmp/bperf.sock 00:35:44.906 17:46:11 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:44.906 17:46:11 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2182411 ']' 00:35:44.906 17:46:11 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:44.906 17:46:11 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.906 17:46:11 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:44.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:44.906 17:46:11 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.906 17:46:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:44.906 [2024-12-09 17:46:11.194856] Starting SPDK v25.01-pre git sha1 608f2e392 / DPDK 24.03.0 initialization... 
00:35:44.906 [2024-12-09 17:46:11.194897] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182411 ] 00:35:44.906 [2024-12-09 17:46:11.269347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.906 [2024-12-09 17:46:11.309884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.906 17:46:11 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.906 17:46:11 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:44.906 17:46:11 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:44.906 17:46:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:45.165 17:46:11 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:45.165 17:46:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:45.424 17:46:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:45.424 17:46:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:45.424 [2024-12-09 17:46:11.957962] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:45.683 nvme0n1 00:35:45.683 17:46:12 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:45.683 17:46:12 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:45.683 17:46:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:45.683 17:46:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:45.683 17:46:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:45.683 17:46:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.942 17:46:12 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:45.942 17:46:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:45.942 17:46:12 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:45.942 17:46:12 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:45.942 17:46:12 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.942 17:46:12 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:45.942 17:46:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.942 17:46:12 keyring_linux -- keyring/linux.sh@25 -- # sn=648571501 00:35:45.942 17:46:12 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:45.942 17:46:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:45.942 17:46:12 keyring_linux -- keyring/linux.sh@26 -- # [[ 648571501 == \6\4\8\5\7\1\5\0\1 ]] 00:35:45.942 17:46:12 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 648571501 00:35:45.942 17:46:12 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:45.942 17:46:12 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:46.200 Running I/O for 1 seconds... 00:35:47.134 21878.00 IOPS, 85.46 MiB/s 00:35:47.134 Latency(us) 00:35:47.134 [2024-12-09T16:46:13.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.134 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:47.134 nvme0n1 : 1.01 21878.79 85.46 0.00 0.00 5831.44 4556.31 11109.91 00:35:47.134 [2024-12-09T16:46:13.674Z] =================================================================================================================== 00:35:47.134 [2024-12-09T16:46:13.674Z] Total : 21878.79 85.46 0.00 0.00 5831.44 4556.31 11109.91 00:35:47.134 { 00:35:47.134 "results": [ 00:35:47.134 { 00:35:47.134 "job": "nvme0n1", 00:35:47.134 "core_mask": "0x2", 00:35:47.134 "workload": "randread", 00:35:47.134 "status": "finished", 00:35:47.134 "queue_depth": 128, 00:35:47.134 "io_size": 4096, 00:35:47.134 "runtime": 1.00586, 00:35:47.134 "iops": 21878.790288907006, 00:35:47.134 "mibps": 85.464024566043, 00:35:47.134 "io_failed": 0, 00:35:47.134 "io_timeout": 0, 00:35:47.134 "avg_latency_us": 5831.440068030302, 00:35:47.134 "min_latency_us": 4556.312380952381, 00:35:47.134 "max_latency_us": 11109.91238095238 00:35:47.134 } 00:35:47.134 ], 00:35:47.134 "core_count": 1 00:35:47.134 } 00:35:47.134 17:46:13 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:47.134 17:46:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:47.392 17:46:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:47.392 17:46:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:47.392 17:46:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:47.392 17:46:13 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:47.392 17:46:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:47.392 17:46:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.651 17:46:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:47.651 17:46:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:47.651 17:46:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:47.651 17:46:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:47.651 17:46:13 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:47.651 17:46:13 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:47.651 17:46:13 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:47.651 17:46:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:47.651 17:46:13 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:47.651 17:46:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:47.651 17:46:13 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:47.651 17:46:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:47.651 [2024-12-09 17:46:14.148304] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:47.651 [2024-12-09 17:46:14.148347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c90b0 (107): Transport endpoint is not connected 00:35:47.651 [2024-12-09 17:46:14.149341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c90b0 (9): Bad file descriptor 00:35:47.651 [2024-12-09 17:46:14.150343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:47.651 [2024-12-09 17:46:14.150360] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:47.651 [2024-12-09 17:46:14.150368] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:47.651 [2024-12-09 17:46:14.150377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:47.651 request: 00:35:47.651 { 00:35:47.651 "name": "nvme0", 00:35:47.651 "trtype": "tcp", 00:35:47.651 "traddr": "127.0.0.1", 00:35:47.651 "adrfam": "ipv4", 00:35:47.651 "trsvcid": "4420", 00:35:47.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:47.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:47.651 "prchk_reftag": false, 00:35:47.651 "prchk_guard": false, 00:35:47.651 "hdgst": false, 00:35:47.651 "ddgst": false, 00:35:47.651 "psk": ":spdk-test:key1", 00:35:47.651 "allow_unrecognized_csi": false, 00:35:47.651 "method": "bdev_nvme_attach_controller", 00:35:47.651 "req_id": 1 00:35:47.651 } 00:35:47.651 Got JSON-RPC error response 00:35:47.651 response: 00:35:47.651 { 00:35:47.651 "code": -5, 00:35:47.651 "message": "Input/output error" 00:35:47.651 } 00:35:47.651 17:46:14 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:47.651 17:46:14 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:47.651 17:46:14 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:47.651 17:46:14 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@33 -- # sn=648571501 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 648571501 00:35:47.651 1 links removed 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:47.651 
17:46:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@33 -- # sn=212132014 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 212132014 00:35:47.651 1 links removed 00:35:47.651 17:46:14 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2182411 00:35:47.651 17:46:14 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2182411 ']' 00:35:47.651 17:46:14 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2182411 00:35:47.651 17:46:14 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2182411 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2182411' 00:35:47.910 killing process with pid 2182411 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@973 -- # kill 2182411 00:35:47.910 Received shutdown signal, test time was about 1.000000 seconds 00:35:47.910 00:35:47.910 Latency(us) 00:35:47.910 [2024-12-09T16:46:14.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.910 [2024-12-09T16:46:14.450Z] =================================================================================================================== 00:35:47.910 [2024-12-09T16:46:14.450Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@978 -- # wait 2182411 
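The `:spdk-test:key0`/`:spdk-test:key1` entries unlinked above were loaded earlier in the NVMe TLS PSK interchange format (`NVMeTLSkey-1:00:<base64>:`). A minimal sketch of that encoding follows, assuming — as the traced `format_interchange_psk` → `python -` pipeline and the 48-character base64 payloads in the log suggest — that the payload is the configured key string followed by its CRC32; the exact CRC byte order is an assumption here, not confirmed by the log:

```python
import base64
import zlib

def format_interchange_psk(key: str, prefix: str = "NVMeTLSkey-1", hash_id: int = 0) -> str:
    """Sketch of the PSK interchange encoding seen in the log:
    base64(key bytes || CRC32 of the key bytes, assumed little-endian),
    wrapped as <prefix>:<hash>:<base64>:."""
    raw = key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, "little")  # CRC placement/endianness is an assumption
    payload = base64.b64encode(raw + crc).decode("ascii")
    return f"{prefix}:{hash_id:02}:{payload}:"

# Same key0 value as in the traced test (keyring/linux.sh@13)
psk = format_interchange_psk("00112233445566778899aabbccddeeff")
print(psk)
```

Decoding the base64 payload back recovers the 32-character key string plus 4 trailing checksum bytes (36 bytes total, hence 48 base64 characters with no padding), which matches the shape of the `NVMeTLSkey-1:00:MDAx…:` strings passed to `keyctl add user` above.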
00:35:47.910 17:46:14 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2182181 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2182181 ']' 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2182181 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2182181 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2182181' 00:35:47.910 killing process with pid 2182181 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@973 -- # kill 2182181 00:35:47.910 17:46:14 keyring_linux -- common/autotest_common.sh@978 -- # wait 2182181 00:35:48.477 00:35:48.477 real 0m4.818s 00:35:48.477 user 0m8.812s 00:35:48.477 sys 0m1.449s 00:35:48.477 17:46:14 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.477 17:46:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:48.478 ************************************ 00:35:48.478 END TEST keyring_linux 00:35:48.478 ************************************ 00:35:48.478 17:46:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:48.478 17:46:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:48.478 17:46:14 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:48.478 17:46:14 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:48.478 17:46:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:48.478 17:46:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:48.478 17:46:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:48.478 17:46:14 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:35:48.478 17:46:14 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:48.478 17:46:14 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:48.478 17:46:14 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:48.478 17:46:14 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:48.478 17:46:14 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:48.478 17:46:14 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:48.478 17:46:14 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:48.478 17:46:14 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:48.478 17:46:14 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:48.478 17:46:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:48.478 17:46:14 -- common/autotest_common.sh@10 -- # set +x 00:35:48.478 17:46:14 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:48.478 17:46:14 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:48.478 17:46:14 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:48.478 17:46:14 -- common/autotest_common.sh@10 -- # set +x 00:35:53.750 INFO: APP EXITING 00:35:53.750 INFO: killing all VMs 00:35:53.750 INFO: killing vhost app 00:35:53.750 INFO: EXIT DONE 00:35:57.039 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:57.039 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:57.039 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:57.039 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:59.573 Cleaning 00:35:59.573 Removing: /var/run/dpdk/spdk0/config 00:35:59.573 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:59.573 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:59.573 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:59.573 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:59.573 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:59.573 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:59.573 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:59.573 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:59.573 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:59.573 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:59.573 Removing: /var/run/dpdk/spdk1/config 00:35:59.573 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:59.573 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:59.573 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:59.573 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:59.573 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:59.573 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:59.573 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:59.573 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:59.573 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:59.573 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:59.573 Removing: /var/run/dpdk/spdk2/config 00:35:59.573 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:59.573 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:59.573 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:59.573 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:59.573 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:59.573 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:59.573 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:59.573 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:59.833 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:59.833 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:59.833 Removing: /var/run/dpdk/spdk3/config 00:35:59.833 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:59.833 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:59.833 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:59.833 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:59.833 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:59.833 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:59.833 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:59.833 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:59.833 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:59.833 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:59.833 Removing: /var/run/dpdk/spdk4/config 00:35:59.833 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:59.833 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:59.833 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:59.833 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:59.833 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:59.833 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:59.833 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:59.833 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:59.833 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:59.833 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:35:59.833 Removing: /dev/shm/bdev_svc_trace.1 00:35:59.833 Removing: /dev/shm/nvmf_trace.0 00:35:59.833 Removing: /dev/shm/spdk_tgt_trace.pid1706679 00:35:59.833 Removing: /var/run/dpdk/spdk0 00:35:59.833 Removing: /var/run/dpdk/spdk1 00:35:59.833 Removing: /var/run/dpdk/spdk2 00:35:59.833 Removing: /var/run/dpdk/spdk3 00:35:59.833 Removing: /var/run/dpdk/spdk4 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1704587 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1705624 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1706679 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1707307 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1708235 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1708463 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1709410 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1709425 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1709768 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1711453 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1712829 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1713119 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1713403 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1713735 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1714287 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1714626 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1714872 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1715148 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1715875 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1718889 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1719051 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1719306 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1719353 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1719788 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1719948 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1720277 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1720495 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1720747 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1720760 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1721012 00:35:59.833 Removing: 
/var/run/dpdk/spdk_pid1721024 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1721569 00:35:59.833 Removing: /var/run/dpdk/spdk_pid1721812 00:36:00.092 Removing: /var/run/dpdk/spdk_pid1722108 00:36:00.092 Removing: /var/run/dpdk/spdk_pid1725753 00:36:00.092 Removing: /var/run/dpdk/spdk_pid1730147 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1740230 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1740902 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1745114 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1745368 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1749568 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1755507 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1758217 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1768821 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1777579 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1779373 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1780271 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1797038 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1801040 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1847442 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1852738 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1858719 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1865381 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1865469 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1866244 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1867045 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1867933 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1868594 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1868601 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1868827 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1868872 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1869034 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1869846 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1870630 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1871520 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1872185 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1872187 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1872412 
00:36:00.093 Removing: /var/run/dpdk/spdk_pid1873456 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1874546 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1882693 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1911358 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1915773 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1917464 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1919213 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1919349 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1919576 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1919592 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1920086 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1921878 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1922756 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1923127 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1925363 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1925838 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1926338 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1930655 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1936540 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1936541 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1936542 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1940256 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1948769 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1952819 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1958912 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1960186 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1961701 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1963218 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1967842 00:36:00.093 Removing: /var/run/dpdk/spdk_pid1972103 00:36:00.351 Removing: /var/run/dpdk/spdk_pid1976045 00:36:00.351 Removing: /var/run/dpdk/spdk_pid1983825 00:36:00.351 Removing: /var/run/dpdk/spdk_pid1983827 00:36:00.351 Removing: /var/run/dpdk/spdk_pid1988444 00:36:00.351 Removing: /var/run/dpdk/spdk_pid1988668 00:36:00.351 Removing: /var/run/dpdk/spdk_pid1988893 00:36:00.351 Removing: /var/run/dpdk/spdk_pid1989338 00:36:00.351 Removing: 
/var/run/dpdk/spdk_pid1989343 00:36:00.351 Removing: /var/run/dpdk/spdk_pid1993955 00:36:00.351 Removing: /var/run/dpdk/spdk_pid1994434 00:36:00.352 Removing: /var/run/dpdk/spdk_pid1998782 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2001461 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2006747 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2011976 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2020566 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2027684 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2027689 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2046802 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2047273 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2047936 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2048398 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2049119 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2049598 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2050265 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2050727 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2054900 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2055133 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2061068 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2061193 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2066500 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2070691 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2080928 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2081397 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2085687 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2086033 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2090145 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2095728 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2098242 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2108197 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2116782 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2118489 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2119379 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2135740 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2139480 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2142253 
00:36:00.352 Removing: /var/run/dpdk/spdk_pid2149890 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2149895 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2154946 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2156810 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2158784 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2159909 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2161834 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2163069 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2172163 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2172613 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2173070 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2175488 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2175941 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2176395 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2180153 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2180165 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2181669 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2182181 00:36:00.352 Removing: /var/run/dpdk/spdk_pid2182411 00:36:00.352 Clean 00:36:00.610 17:46:26 -- common/autotest_common.sh@1453 -- # return 0 00:36:00.610 17:46:26 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:00.610 17:46:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:00.610 17:46:26 -- common/autotest_common.sh@10 -- # set +x 00:36:00.610 17:46:26 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:00.610 17:46:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:00.610 17:46:26 -- common/autotest_common.sh@10 -- # set +x 00:36:00.610 17:46:27 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:00.610 17:46:27 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:00.610 17:46:27 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:00.610 17:46:27 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:00.610 17:46:27 
-- spdk/autotest.sh@398 -- # hostname 00:36:00.610 17:46:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:00.869 geninfo: WARNING: invalid characters removed from testname! 00:36:22.797 17:46:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:24.174 17:46:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:26.077 17:46:52 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:27.981 17:46:54 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:29.884 17:46:56 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:31.838 17:46:58 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:33.778 17:46:59 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:33.778 17:46:59 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:33.778 17:46:59 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:36:33.778 17:46:59 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:33.778 17:46:59 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:33.778 17:46:59 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:33.778 + [[ -n 
1626489 ]] 00:36:33.778 + sudo kill 1626489 00:36:33.787 [Pipeline] } 00:36:33.803 [Pipeline] // stage 00:36:33.808 [Pipeline] } 00:36:33.823 [Pipeline] // timeout 00:36:33.828 [Pipeline] } 00:36:33.842 [Pipeline] // catchError 00:36:33.847 [Pipeline] } 00:36:33.862 [Pipeline] // wrap 00:36:33.868 [Pipeline] } 00:36:33.880 [Pipeline] // catchError 00:36:33.889 [Pipeline] stage 00:36:33.891 [Pipeline] { (Epilogue) 00:36:33.903 [Pipeline] catchError 00:36:33.905 [Pipeline] { 00:36:33.917 [Pipeline] echo 00:36:33.919 Cleanup processes 00:36:33.925 [Pipeline] sh 00:36:34.210 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:34.210 2193238 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:34.222 [Pipeline] sh 00:36:34.506 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:34.506 ++ grep -v 'sudo pgrep' 00:36:34.506 ++ awk '{print $1}' 00:36:34.506 + sudo kill -9 00:36:34.506 + true 00:36:34.518 [Pipeline] sh 00:36:34.802 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:47.022 [Pipeline] sh 00:36:47.307 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:47.307 Artifacts sizes are good 00:36:47.320 [Pipeline] archiveArtifacts 00:36:47.327 Archiving artifacts 00:36:47.444 [Pipeline] sh 00:36:47.730 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:47.742 [Pipeline] cleanWs 00:36:47.752 [WS-CLEANUP] Deleting project workspace... 00:36:47.752 [WS-CLEANUP] Deferred wipeout is used... 00:36:47.759 [WS-CLEANUP] done 00:36:47.760 [Pipeline] } 00:36:47.777 [Pipeline] // catchError 00:36:47.787 [Pipeline] sh 00:36:48.069 + logger -p user.info -t JENKINS-CI 00:36:48.078 [Pipeline] } 00:36:48.092 [Pipeline] // stage 00:36:48.097 [Pipeline] } 00:36:48.111 [Pipeline] // node 00:36:48.116 [Pipeline] End of Pipeline 00:36:48.152 Finished: SUCCESS